tensorflow port
https://github.com/thtrieu/darkflow Weights can be downloaded from the Google Drive link in the README, or pjreddie's original weights can be used.
pjreddie (original author)
- https://github.com/pjreddie/TopDeepLearning A list of popular deep learning projects.
- https://groups.google.com/forum/#!forum/darknet forum
AlexeyAB
- https://timebutt.github.io/static/how-to-train-yolov2-to-detect-custom-objects/ Training tutorial by Nils Tijtgat; uses the dataset setup from https://pjreddie.com/darknet/yolo/. YOLOv2 is known to struggle when detecting small objects. The Darknet Google Groups has many topics on how you could improve performance; have a look there for inspiration. A suggestion that is often repeated is to train YOLOv2 at a higher input resolution than 416x416 (see, for instance, the "yolo small" Google Groups threads).
- https://timebutt.github.io/static/understanding-yolov2-training-output/
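The higher-resolution suggestion above amounts to editing the [net] section of the model's .cfg file. A minimal sketch (the 608x608 value is illustrative; YOLOv2 input dimensions should be multiples of 32):

```
[net]
# default is width=416, height=416; a larger input helps with
# small objects at the cost of speed and GPU memory
width=608
height=608
```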
- https://github.com/AlexeyAB/darknet#how-to-train-pascal-voc-data AlexeyAB's fork of YOLO. You can download an Android webcam app and use an Android phone as a network camera input stream.
bounding box
https://groups.google.com/forum/#!topic/darknet/qrcGefJ6d5g
https://gist.github.com/WillieMaddox/3b1159baecb809b5fcb3a6154bc3cb0b
Jumabek
https://github.com/Jumabek/darknet_scripts Scripts for generating anchors for the region layer (discussed on Google Groups).
darknetfanz
train yolo on coco-style data: The first time I made a custom dataset and ran the 'demo' argument, I changed yolo.c line 13 ("char *voc_names[] = ...") to reflect my custom classes. The second time I made a custom dataset, I added an argument to darknet.c, "-override_vocnames", that loaded the appropriate "names=" file from the data file (e.g. coco.data).
- Maybe not the best way to do it, but it was easy to implement.
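A minimal sketch of what an "-override_vocnames"-style loader could do (the function name and size limits here are assumptions, not code from the fork): read one class name per line from the file referenced by the "names=" entry, instead of relying on the hard-coded voc_names array in yolo.c.

```c
#include <stdio.h>
#include <string.h>

#define MAX_CLASSES  256
#define MAX_NAME_LEN 64

/* Read one class name per line from a .names file.
 * Returns the number of classes read, or -1 on failure. */
int load_class_names(const char *path, char names[][MAX_NAME_LEN], int max) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    int n = 0;
    char line[MAX_NAME_LEN];
    while (n < max && fgets(line, sizeof line, f)) {
        line[strcspn(line, "\r\n")] = '\0';   /* strip trailing newline */
        if (line[0] == '\0') continue;        /* skip blank lines */
        strncpy(names[n], line, MAX_NAME_LEN - 1);
        names[n][MAX_NAME_LEN - 1] = '\0';
        n++;
    }
    fclose(f);
    return n;
}
```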
thtrieu
https://github.com/thtrieu/darkflow JSON output can be generated with the label and the pixel coordinates of each bounding box. Each prediction is stored in the sample_img/out folder by default. An example json array is shown below.
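A representative sketch of that JSON array, assuming darkflow's documented fields (label, confidence, topleft, bottomright); the values here are illustrative:

```json
[
    {"label": "person", "confidence": 0.56,
     "topleft": {"x": 184, "y": 101},
     "bottomright": {"x": 274, "y": 382}}
]
```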
Sai
- https://github.com/saiprabhakar/darknet-modified/tree/v0 Outputs image labels and bounding boxes to a text file. When a person walking down the street veers onto the driveway, his position change triggers an alert. https://groups.google.com/forum/#!topic/darknet/ylEWe3JUKrE
- https://github.com/saiprabhakar/Scene-recognition subscene analysis
- https://github.com/saiprabhakar/DeepDriving Deep driving.
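The driveway-alert idea above can be sketched as a small post-processing check on the bounding boxes written to the text file (the struct and region coordinates below are assumptions for illustration, not code from the fork): test whether a detection's box center has entered a user-defined driveway rectangle.

```c
/* Pixel-space rectangle: a detection box or a watched region. */
typedef struct { int left, top, right, bottom; } box_px;

/* Returns 1 if the center of the detection box lies inside the
 * watched region (e.g. the driveway), 0 otherwise. */
int center_in_region(box_px det, box_px region) {
    int cx = (det.left + det.right) / 2;
    int cy = (det.top + det.bottom) / 2;
    return cx >= region.left && cx <= region.right &&
           cy >= region.top  && cy <= region.bottom;
}
```

An alert would then fire when a tracked person's box transitions from outside to inside the region between frames.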
Guanghan
- https://github.com/Guanghan/darknet This fork adds some additional features on top of the current darknet from pjreddie, e.g. (1) read a video file, process it, and output a video with bounding boxes.
- http://guanghan.info/blog/en/my-works/train-yolo/ His guide to training YOLO, and his SSD detector.
- https://groups.google.com/forum/#!topic/darknet/cxTAbP-um7Y ,
- https://github.com/puzzledqs/BBox-Label-Tool ,
- https://github.com/Guanghan/darknet/blob/master/scripts/convert.py
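Scripts like convert.py translate BBox-Label-Tool pixel annotations into darknet's normalized label format, one line per object: "<class> <x_center> <y_center> <width> <height>", with each value scaled to [0, 1] by the image dimensions. A minimal C sketch of that conversion (the function name is hypothetical):

```c
#include <stdio.h>

/* Convert a pixel-space box (xmin, ymin, xmax, ymax) into a
 * darknet label line with center/size normalized by image size. */
void to_darknet_line(int cls, double xmin, double ymin,
                     double xmax, double ymax,
                     double img_w, double img_h,
                     char *out, size_t out_len) {
    double x = (xmin + xmax) / 2.0 / img_w;   /* normalized center x */
    double y = (ymin + ymax) / 2.0 / img_h;   /* normalized center y */
    double w = (xmax - xmin) / img_w;         /* normalized width    */
    double h = (ymax - ymin) / img_h;         /* normalized height   */
    snprintf(out, out_len, "%d %.6f %.6f %.6f %.6f", cls, x, y, w, h);
}
```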
I am wondering about the answer to the original question: can we get the coordinates and count of detected objects as text output in darknet?
Yes you can. In src/image.c, find the draw_detections function: left, right, top, bot are the bounding box coordinates in the image and names[class] is the object name. You can save the bounding box and object name to a txt file and count the objects.
http://guanghan.info/projects/ROLO/ ROLO, a fork of YOLO, does real-time tracking and identification of human body parts such as the face, allowing accurate engagement by the tracked-vehicle robot's PepperBall gun. https://github.com/Guanghan/ROLO
Yolo python wrapper
https://github.com/IvonaTau/Python-wrapper-for-YOLO , https://groups.google.com/forum/#!topic/darknet/f-TICXNR1_E
https://github.com/thomaspark-pkj/pyyolo Another Python wrapper.
https://pjreddie.com/darknet/yolo/
Sakmann
https://medium.com/@ksakmann/vehicle-detection-and-tracking-using-hog-features-svm-vs-yolo-73e1ccb35866 Vehicle detection and tracking using HOG features: SVM vs. YOLO.