# amazon-research/patchcore-inspection


# Towards Total Recall in Industrial Anomaly Detection

This repository contains the implementation for PatchCore as proposed in Roth et al. (2021), https://arxiv.org/abs/2106.08265.

It also provides various pretrained models that can achieve up to 99.6% image-level anomaly detection AUROC, 98.4% pixel-level anomaly localization AUROC and >95% PRO score (although the latter metric is not included for license reasons).

For questions & feedback, please reach out to [email protected]!

## Quick Guide

First, clone this repository and set the PYTHONPATH environment variable with `env PYTHONPATH=src python bin/run_patchcore.py`. To train PatchCore on MVTec AD (as described below), run

```shell
datapath=/path_to_mvtec_folder/mvtec
datasets=('bottle' 'cable' 'capsule' 'carpet' 'grid' 'hazelnut' 'leather' 'metal_nut' 'pill' 'screw' 'tile' 'toothbrush' 'transistor' 'wood' 'zipper')
dataset_flags=($(for dataset in "${datasets[@]}"; do echo '-d '$dataset; done))

python bin/run_patchcore.py --gpu 0 --seed 0 --save_patchcore_model \
--log_group IM224_WR50_L2-3_P01_D1024-1024_PS-3_AN-1_S0 --log_online --log_project MVTecAD_Results results \
patch_core -b wideresnet50 -le layer2 -le layer3 --faiss_on_gpu \
--pretrain_embed_dimension 1024 --target_embed_dimension 1024 --anomaly_scorer_num_nn 1 --patchsize 3 \
sampler -p 0.1 approx_greedy_coreset dataset --resize 256 --imagesize 224 "${dataset_flags[@]}" mvtec $datapath
```

which runs PatchCore on MVTec images of size 224x224 using a WideResNet50 backbone pretrained on ImageNet. For other sample runs with different backbones, larger images or ensembles, see `sample_training.sh`.
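For intuition about what the run above actually produces: PatchCore "training" builds a memory bank of patch features from nominal images and, at test time, scores each patch of a test image by its distance to the nearest entry in that memory. Below is a minimal numpy sketch of that scoring idea only - it is not the repository's API, and the random arrays merely stand in for locally aggregated WideResNet50 layer2/layer3 features.

```python
# Illustrative sketch of nearest-neighbour patch scoring (not the repository's code).
# Random arrays stand in for locally aggregated backbone features.
import numpy as np

rng = np.random.default_rng(0)

# Memory bank: patch features collected from nominal ("good") training images.
memory_bank = rng.normal(size=(5000, 1024)).astype(np.float32)     # (num_memory_patches, feature_dim)

# Patch features of one test image, e.g. a 28x28 spatial grid of features.
test_patches = rng.normal(size=(28 * 28, 1024)).astype(np.float32)

# Squared L2 distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
# (avoids materialising a huge (num_test, num_memory, dim) array).
d2 = (
    (test_patches ** 2).sum(axis=1)[:, None]
    + (memory_bank ** 2).sum(axis=1)[None, :]
    - 2.0 * test_patches @ memory_bank.T
)
patch_scores = np.sqrt(np.clip(d2, 0.0, None)).min(axis=1)  # distance to nearest nominal patch
image_score = patch_scores.max()                            # image-level anomaly score

print(patch_scores.shape, float(image_score))
```

In the repository itself, this nearest-neighbour search is handled by faiss (optionally on GPU via `--faiss_on_gpu`), and the memory bank is coreset-subsampled before being stored; see the In-Depth Description below.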
Given a pretrained PatchCore model (or models for all MVTec AD subdatasets), these can be evaluated using

```shell
datapath=/path_to_mvtec_folder/mvtec
loadpath=/path_to_pretrained_patchcores_models
modelfolder=IM224_WR50_L2-3_P001_D1024-1024_PS-3_AN-1_S0
savefolder=evaluated_results'/'$modelfolder

datasets=('bottle' 'cable' 'capsule' 'carpet' 'grid' 'hazelnut' 'leather' 'metal_nut' 'pill' 'screw' 'tile' 'toothbrush' 'transistor' 'wood' 'zipper')
dataset_flags=($(for dataset in "${datasets[@]}"; do echo '-d '$dataset; done))
model_flags=($(for dataset in "${datasets[@]}"; do echo '-p '$loadpath'/'$modelfolder'/models/mvtec_'$dataset; done))

python bin/load_and_evaluate_patchcore.py --gpu 0 --seed 0 $savefolder \
patch_core_loader "${model_flags[@]}" --faiss_on_gpu \
dataset --resize 366 --imagesize 320 "${dataset_flags[@]}" mvtec $datapath
```

A set of pretrained PatchCores is hosted here: *add link*. To use them (and replicate training), check out `sample_evaluation.sh` and `sample_training.sh`.

## In-Depth Description

### Requirements

Our results were computed using Python 3.8, with packages and respective versions noted in `requirements.txt`. In general, the majority of experiments should not exceed 11 GB of GPU memory; however, using significantly larger input images will incur higher memory cost.

### Setting up MVTec AD

To set up the main MVTec AD benchmark, download it from here: https://www.mvtec.com/company/research/datasets/mvtec-ad. Place it in some location `datapath`. Make sure that it follows the data tree below:

```
mvtec
|-- bottle
|-----|----- ground_truth
|-----|----- test
|-----|--------|------ good
|-----|--------|------ broken_large
|-----|--------|------ ...
|-----|----- train
|-----|--------|------ good
|-- cable
|-- ...
```

containing in total 15 subdatasets: bottle, cable, capsule, carpet, grid, hazelnut, leather, metal_nut, pill, screw, tile, toothbrush, transistor, wood, zipper.

### "Training" PatchCore

PatchCore extracts a (coreset-subsampled) memory of pretrained, locally aggregated training patch features. To do so, we have provided `bin/run_patchcore.py`, which uses `click` to manage and aggregate input arguments. This looks something like

```shell
python bin/run_patchcore.py \
--gpu --seed                      # Set GPU-id & reproducibility seed.
--save_patchcore_model            # If set, saves the PatchCore model(s).
--log_online                      # If set, logs results to a Weights & Biases account.
--log_group IM224_WR50_L2-3_P01_D1024-1024_PS-3_AN-1_S0 --log_project MVTecAD_Results results # Logging details: name of the run & name of the overall project folder.
patch_core                        # We now pass all PatchCore-related parameters.
-b wideresnet50                   # Which backbone to use.
-le layer2 -le layer3             # Which layers to extract features from.
--faiss_on_gpu                    # If similarity searches should be performed on GPU.
--pretrain_embed_dimension 1024 --target_embed_dimension 1024 # Dimensionality of features extracted from backbone layer(s) and final aggregated PatchCore dimensionality.
--anomaly_scorer_num_nn 1 --patchsize 3 # Num. nearest neighbours to use for anomaly detection & neighbourhood size for local aggregation.
sampler                           # We now pass all the (coreset-)subsampling parameters.
-p 0.1 approx_greedy_coreset      # Subsampling percentage & exact subsampling method.
dataset                           # We now pass all the dataset-relevant parameters.
--resize 256 --imagesize 224 "${dataset_flags[@]}" mvtec $datapath # Initial resizing shape and final image size (center-cropped) as well as the MVTec subdatasets to use.
```
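The `sampler -p 0.1 approx_greedy_coreset` stage reduces the patch-feature memory to roughly 10% of its original size. As a rough illustration of the underlying greedy (k-center) selection idea - the repository's `approx_greedy_coreset` is an approximate, projection-based variant of this, so treat the sketch below as conceptual only:

```python
# Toy k-center-greedy coreset selection (conceptual sketch, not the repository's implementation).
import numpy as np

def greedy_coreset(features: np.ndarray, percentage: float, seed: int = 0) -> np.ndarray:
    """Return indices of a coreset covering `percentage` of `features`."""
    rng = np.random.default_rng(seed)
    n_select = max(1, int(len(features) * percentage))
    selected = [int(rng.integers(len(features)))]            # random start point
    # Distance of every feature to its closest already-selected coreset member.
    min_dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(n_select - 1):
        idx = int(np.argmax(min_dists))                      # farthest remaining feature
        selected.append(idx)
        new_dists = np.linalg.norm(features - features[idx], axis=1)
        min_dists = np.minimum(min_dists, new_dists)         # update coverage distances
    return np.asarray(selected)

features = np.random.default_rng(0).normal(size=(2000, 64)).astype(np.float32)
memory_bank = features[greedy_coreset(features, percentage=0.1)]
print(memory_bank.shape)                                     # (200, 64)
```

Always picking the feature farthest from the current selection keeps the reduced memory bank spread across the nominal feature distribution, which is why a small subsampling percentage (`-p 0.1`, or `-p 0.01` in the ensemble example below) can be used without giving up much coverage.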
Note that `sample_runs.sh` contains exemplary training runs to achieve strong AD performance. Due to repository changes (& hardware differences), results may deviate slightly from those reported in the paper, but should generally be very close or even better. As mentioned previously, for re-use and replicability we have also provided several pretrained PatchCore models hosted at *add link* - download the folder, extract it, and pass the model of your choice to `bin/load_and_evaluate_patchcore.py`, which showcases an exemplary evaluation process.

During (after) training, the following information will be stored:

```
| PatchCore model (if --save_patchcore_model is set)
|-- models
|-----|----- mvtec_bottle
|-----|-----------|------- nnscorer_search_index.faiss
|-----|-----------|------- patchcore_params.pkl
|-----|----- mvtec_cable
|-----|----- ...
|-- results.csv  # Contains performance for each subdataset.
| Sample_segmentations (if --save_segmentation_images is set)
```

In addition to the main training process, we have also included Weights & Biases logging, which allows you to log all training & test performances online to Weights-and-Biases servers (https://wandb.ai). To use it, include the `--log_online` flag and provide your W&B key in `run_patchcore.py > --log_wandb_key`.

Finally, due to the effectiveness and efficiency of PatchCore, we also incorporate the option to use an ensemble of backbone networks and network feature maps. For this, provide the list of backbones to use (as listed in `/src/anomaly_detection/backbones.py`) with `-b`. An example with three different backbones would look something like

```shell
python bin/run_patchcore.py --gpu --seed --save_patchcore_model --log_group --log_online --log_project results \
patch_core -b wideresnet101 -b resnext101 -b densenet201 -le 0.layer2 -le 0.layer3 -le 1.layer2 -le 1.layer3 -le 2.features.denseblock2 -le 2.features.denseblock3 --faiss_on_gpu \
--pretrain_embed_dimension 1024 --target_embed_dimension 384 --anomaly_scorer_num_nn 1 --patchsize 3 sampler -p 0.01 approx_greedy_coreset dataset --resize 256 --imagesize 224 "${dataset_flags[@]}" mvtec $datapath
```

When using `--save_patchcore_model` with ensembles, a respective ensemble of PatchCore parameters is stored.

### Evaluating a pretrained PatchCore model

To evaluate a/our pretrained PatchCore model(s), run

```shell
python bin/load_and_evaluate_patchcore.py --gpu --seed $savefolder \
patch_core_loader "${model_flags[@]}" --faiss_on_gpu \
dataset --resize 366 --imagesize 320 "${dataset_flags[@]}" mvtec $datapath
```

assuming your pretrained model locations are contained in `model_flags`, one for each subdataset in `dataset_flags`. Results will then be stored in `savefolder`. Example model & dataset flags:

```shell
model_flags=('-p' 'path_to_mvtec_bottle_patchcore_model' '-p' 'path_to_mvtec_cable_patchcore_model' ...)
dataset_flags=('-d' 'bottle' '-d' 'cable' ...)
```

### Expected performance of pretrained models

While there may be minor changes in performance due to software & hardware differences, the provided pretrained models should achieve the performances listed in their respective `results.csv` files (a short helper for averaging these per-subdataset numbers is sketched at the end of this README). The mean performance (particularly for the baseline WR50 as well as the larger Ensemble model) should look something like:

| Model | Mean AUROC | Mean Seg. AUROC | Mean PRO |
|---|---|---|---|
| WR50-baseline | 99.2% | 98.1% | 94.4% |
| Ensemble | 99.6% | 98.2% | 94.9% |

### Citing

If you use the code in this repository, please cite

```
@misc{roth2021total,
      title={Towards Total Recall in Industrial Anomaly Detection},
      author={Karsten Roth and Latha Pemula and Joaquin Zepeda and Bernhard Schölkopf and Thomas Brox and Peter Gehler},
      year={2021},
      eprint={2106.08265},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

### Security

See CONTRIBUTING for more information.

### License

This project is licensed under the Apache-2.0 License.
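As mentioned in the performance section above, every run writes a `results.csv` with one row per subdataset. A small, hedged helper for averaging those rows is sketched below; it assumes a plain CSV with a header row and per-subdataset numeric metric columns, and deliberately makes no assumption about the exact column names `bin/run_patchcore.py` writes - inspect the file header if the output looks off.

```python
# Hedged convenience sketch: average the numeric columns of a results.csv
# (one row per MVTec subdataset) to compare against the table above.
import csv
import statistics
import sys

def mean_metrics(results_csv: str) -> dict:
    """Mean of every column that parses as a number; other columns are skipped."""
    with open(results_csv, newline="") as f:
        rows = list(csv.DictReader(f))
    means = {}
    for column in rows[0]:
        try:
            means[column] = statistics.mean(float(row[column]) for row in rows)
        except (TypeError, ValueError):
            pass  # non-numeric column, e.g. the subdataset name
    return means

if __name__ == "__main__":
    # Usage: python mean_results.py /path/to/results.csv
    for metric, value in mean_metrics(sys.argv[1]).items():
        print(f"{metric}: {value:.4f}")
```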


