1/24/2024

Scannerz cards

I'm not good at woodworking, and I thought wood might be rough on the cards. Instead, I decided to use Lego, so I bought a medium bin of bricks you can get at several retailers. I challenged myself to stick to this box only - no other supports - hence why this thing looks bare-bones. The design was inspired by a cheap $7 card sorter I got years ago. This project won't show how to build it brick by brick, but there should be enough pictures here to replicate it or make it even better!

The servo in the back is able to spin continuously and moves the tires forward in a simple cog-like setup. The wheel at the front, hanging out from the dark green piece, keeps the other cards from slipping out: there is just enough room to push one card out at a time. I also used a few cards taped together to keep enough weight on the stack to ensure that only one came out. Full disclosure - you'll notice in the first video that a picture was taken when the card wasn't in position. This happened from time to time, but it was trivial to remove the blank pictures.

A Raspberry Pi was the best choice for this project, as I was going to need to run Python for the peripherals. The other things we need are two servo motors and a camera. I have a 5V power supply connected to the breadboard - not mandatory, but helpful.

The code is written entirely in Python 2.7. One script powers the servos and takes the pictures; the other processes the pictures stored in S3 against Rekognition. Once we have our cards loaded onto the Lego platform, we can simply do:

python mtg_servo.py

This will start the servos and scan the cards. Once it's done, we can quit the script and load more. I was able to do about 20-25 cards a minute. Each picture was written to a path named for the card set's three-letter code - e.g., JPGs for the "M13" set were written to that set's folder. This helps us stay organized for both image processing and the pricing API.

For reading the cards, I tried doing OCR with Tesseract and OpenCV, and with AWS Rekognition. While both are amazing tools, Rekognition proved much easier to use. It allowed a lot of flexibility for positioning, lighting, distance, etc. You'll need an AWS account in order to do this, which is free, and Amazon is pretty generous with the AWS Free Tier - you can process 5,000 pictures per month under it.

I uploaded the files to S3 manually for the sake of time (not shown, but here's a guide). The S3 bucket was set up exactly like the local directory: /set_name/file.jpg. The screenshots below show the demo version of Rekognition processing some of the pictures taken. You'll notice that it's incredibly accurate, despite any issues with the photos.

We can automate this process! Once all of the cards have been uploaded to the bucket, we can run the code below to output our detected text into a CSV:

python Rekognize_S3.py

import requests
import sys
import os
import time
import boto3

# Define our global variables
# This argument is defined when we run our script.

Here were the results of the image processing: 100 names were off by more than one letter (10.9%). The two main issues I ran into were (1) the fonts - many had characters deceptively close to one another, to the point where even I had a hard time deciphering them - and (2) lighting. I have no doubt the accuracy would have been much higher if the pictures had been of better quality.

After that, I wrote a quick Python script to hit TCGplayer's API for the market price of the cards. Keep in mind that there is an application process for the API. In the end, I had about $275 worth of commons, uncommons, and rares (after removing all of the cards I knew were already worth money). Sweet!

*Edit 05/27/18: I've updated the Rekognition script to run the detected text against TCGplayer's API in real time (and write the results to a file).

I hope this inspires you to bust out those old cards and do something with them! I plan on doing this again with sports cards and various other sets.
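The servo script (mtg_servo.py) isn't listed in the post, so here is a minimal sketch of how a continuous-rotation feeder servo could be driven from a Raspberry Pi with the RPi.GPIO library. The pin number, timings, and function names are my assumptions rather than the author's code, and angle_to_duty assumes the common 1-2 ms hobby-servo pulse range at 50 Hz.

```python
def angle_to_duty(angle, freq=50):
    """Map a servo angle (0-180) to a PWM duty-cycle percentage.

    Assumes a 1.0 ms pulse at 0 degrees and 2.0 ms at 180 degrees,
    the usual range for hobby servos driven at 50 Hz.
    """
    pulse_ms = 1.0 + (angle / 180.0)   # 1.0-2.0 ms pulse width
    period_ms = 1000.0 / freq          # 20 ms period at 50 Hz
    return round(pulse_ms / period_ms * 100, 2)

def feed_one_card(pin=18, push_time=0.6):
    """Hardware-only sketch: spin the feeder servo long enough to push one card."""
    import time
    import RPi.GPIO as GPIO  # imported here so angle_to_duty stays usable off-Pi
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    pwm = GPIO.PWM(pin, 50)             # 50 Hz servo signal
    pwm.start(angle_to_duty(180))       # full speed on a continuous servo
    time.sleep(push_time)
    pwm.ChangeDutyCycle(angle_to_duty(90))  # ~neutral stops a continuous servo
    pwm.stop()
    GPIO.cleanup(pin)
```

A second servo triggered the camera position in the real build; the same start/ChangeDutyCycle pattern would apply to it.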
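The set_name/file.jpg layout described above can be captured in a tiny helper so the camera script and the Rekognition script stay in sync. This is an illustrative sketch; the zero-padded numbering and function names are my assumptions, not necessarily the author's naming scheme.

```python
def card_image_key(set_code, card_number):
    """Build a set_name/file.jpg key, e.g. ("M13", 7) -> "M13/007.jpg".

    The same string works as a local relative path and as an S3 key.
    """
    return "{}/{:03d}.jpg".format(set_code, card_number)

def set_from_key(key):
    """Recover the three-letter set code from a key like "M13/007.jpg"."""
    return key.split("/", 1)[0]
```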
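Rekognize_S3.py isn't shown beyond its imports, but the flow described above - list every /set_name/file.jpg object, run it through Rekognition's detect_text, and write the detected text to a CSV - could look roughly like this. The bucket name, CSV columns, and function names are my assumptions, not the author's script.

```python
import csv

def rows_for_csv(set_code, filename, detections):
    """Keep LINE-level Rekognition detections as flat CSV rows."""
    return [
        [set_code, filename, d["DetectedText"], round(d["Confidence"], 2)]
        for d in detections
        if d["Type"] == "LINE"
    ]

def scan_bucket(bucket, out_path="detected_text.csv"):
    """Run every image in the bucket through Rekognition and write one CSV."""
    import boto3  # imported here so rows_for_csv stays usable without AWS
    s3 = boto3.client("s3")
    rek = boto3.client("rekognition")
    with open(out_path, "w") as f:
        writer = csv.writer(f)
        writer.writerow(["set", "file", "detected_text", "confidence"])
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                set_code, _, filename = obj["Key"].partition("/")
                resp = rek.detect_text(
                    Image={"S3Object": {"Bucket": bucket, "Name": obj["Key"]}}
                )
                writer.writerows(
                    rows_for_csv(set_code, filename, resp["TextDetections"])
                )
```

Filtering to LINE-type detections keeps one row per line of card text rather than one per word, which makes matching against card names simpler later.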
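The accuracy tally above ("off by more than one letter") implies comparing each detected name to the true card name with an edit distance. This is my reconstruction of such a check, not the author's code:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def misread_rate(pairs, threshold=1):
    """Fraction of (detected, actual) pairs off by more than `threshold` letters."""
    misses = sum(1 for d, a in pairs if edit_distance(d, a) > threshold)
    return misses / float(len(pairs))
```

Under this definition a single substituted character still counts as a usable read, matching the post's "more than one letter" cutoff.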
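The pricing step boils down to matching each detected name to a market price and summing. TCGplayer's actual endpoints and response shape aren't shown in the post (and the API requires an approved key), so this sketch covers only the aggregation over already-fetched prices, with illustrative field names:

```python
def tally_market_prices(cards):
    """Sum market prices and subtotal them by rarity.

    `cards` is a list of dicts like
    {"name": ..., "rarity": ..., "market_price": ...},
    the kind of records we might build after matching detected names
    against a pricing API; the field names are assumptions.
    """
    totals = {}
    grand_total = 0.0
    for card in cards:
        price = card.get("market_price") or 0.0  # treat unpriced cards as $0
        totals[card["rarity"]] = round(totals.get(card["rarity"], 0.0) + price, 2)
        grand_total += price
    return round(grand_total, 2), totals
```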