A semi-transparent square can be drawn on an image by creating a temporary image and using Image.alpha_composite(), as shown in the code below. Note that this approach supports semi-transparent squares in colors other than black.

    from io import BytesIO
    from urllib.request import urlopen
    from PIL import Image, ImageDraw

    TINT_COLOR = (0, 0, 0)  # Tint color for the square (assumed value: black).
    OPACITY = 127           # Roughly 50% opaque (assumed value).

    with BytesIO(urlopen(url).read()) as file:  # 'url' is the address of the image to tint (defined elsewhere)
        img = Image.open(file).convert("RGBA")  # (open/convert step assumed)

        # Determine extent of the largest possible square centered on the image.
        # Calculate upper point + 1 because second point needs to be just outside the
        # drawn rectangle when drawing rectangles. (Centering arithmetic reconstructed.)
        side = min(img.size)
        llx, lly = (img.width - side) // 2, (img.height - side) // 2
        urx, ury = llx + side + 1, lly + side + 1

        # Make a blank image the same size as the image for the rectangle, initialized
        # to a fully transparent (0% opaque) version of the tint color, then draw a
        # semi-transparent version of the square on it.
        overlay = Image.new('RGBA', img.size, TINT_COLOR + (0,))
        draw = ImageDraw.Draw(overlay)  # Create a context for drawing things on it.
        draw.rectangle(((llx, lly), (urx, ury)), fill=TINT_COLOR + (OPACITY,))

        # Alpha composite these two images together to obtain the desired result.
        img = Image.alpha_composite(img, overlay)
        img = img.convert("RGB")  # Remove alpha for saving in jpg format.

Now that I've completed the basic systems for my robot, I'm working on a user interface that will be intuitive and provide plenty of visual feedback. The 'bot will be driven with a PS3-style controller, as I'm most comfortable and familiar with those layouts, and I'm thinking of designing a UI that bears similarity to popular driving video games (like GTA3). I figure that a lot of very smart people have spent years and millions of dollars designing systems and UIs that are simple enough for a 7-year-old to master, yet robust enough for training military personnel. So why reinvent the wheel when the best practices already exist? That said, I'm leaning towards an interface that is simply a camera feed overlaid with data and images. My interface is currently Python-based with the wxWidgets and OpenCV Python wrappers, so I need a way to do this with these frameworks. Turns out it's actually not too bad.

After some googling around, I came across this article. It's written in C++, with which I have a passing familiarity, and it assumes a deeper understanding of the inner workings of OpenCV than the casual hobbyist (like myself) typically possesses. Still, I was able to parse his code and convert it to Python for use in my UI's framework. I won't reproduce his entire code, just the function that matters:

    // OverlayImage function reproduced from:
    void OverlayImage(IplImage* src, IplImage* overlay, CvPoint location, CvScalar S, CvScalar D)

Basically, his function takes two images (src and overlay), a point (x, y) on the source image where the overlay will be drawn, then two scalar variables that will be used for blending the images. Then he simply iterates over each pixel in the source image, starting at the point where the overlay will be drawn, and blends it with the corresponding pixel of the overlay image using the blending coefficients S and D. The end result is that the source image is modified with the overlay image blended in at (x, y).

Here's my first stab at converting his function into Python:

    from cv2 import cv  # legacy cv API; LoadImage/Get2D/Set2D live here rather than in cv2 itself

    src = cv.LoadImage("image.jpg")      # Load a source image
    overlay = cv.LoadImage("ghost.png")  # Load an image to overlay
    posx = 170                           # Define a point (posx, posy) on the source
    posy = 100                           # image where the overlay will be placed
    S = (0.5, 0.5, 0.5, 0.5)             # Define blending coefficients S and D
    D = (0.5, 0.5, 0.5, 0.5)

    def OverlayImage(src, overlay, posx, posy, S, D):
        for x in range(overlay.width):
            for y in range(overlay.height):
                source = cv.Get2D(src, y + posy, x + posx)
                over = cv.Get2D(overlay, y, x)
                # Blend the two pixels channel by channel with the coefficients S and D.
                # (This blend step is reconstructed from the description above.)
                merged = tuple(S[i] * source[i] + D[i] * over[i] for i in range(4))
                cv.Set2D(src, y + posy, x + posx, merged)
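With the coefficients defined above (S = D = (0.5, 0.5, 0.5, 0.5)), every output pixel is simply the average of the source and overlay pixels. As a quick illustrative check of the arithmetic: a source pixel of (200, 80, 40) and an overlay pixel of (0, 255, 0) blend to (0.5·200 + 0.5·0, 0.5·80 + 0.5·255, 0.5·40 + 0.5·0) = (100, 167.5, 20), a 50/50 mix of the two images.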
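For comparison, here is a minimal sketch of the same weighted blend written against the newer cv2/NumPy interface instead of the legacy cv API, replacing the slow per-pixel Python loop with a single cv2.addWeighted() call on a region of interest. The function name, the file names, and the assumption that the overlay fits entirely inside the source image are mine, not from the original article:

    import cv2

    def overlay_blend(src, overlay, posx, posy, s=0.5, d=0.5):
        # Blend 'overlay' into 'src' at (posx, posy): result = s*source + d*overlay.
        # Assumes the overlay lies entirely within the bounds of src.
        h, w = overlay.shape[:2]
        roi = src[posy:posy + h, posx:posx + w]  # region of src covered by the overlay
        src[posy:posy + h, posx:posx + w] = cv2.addWeighted(roi, s, overlay, d, 0)
        return src

    # Hypothetical usage, mirroring the snippet above:
    src = cv2.imread("image.jpg")
    ghost = cv2.imread("ghost.png")
    cv2.imwrite("blended.jpg", overlay_blend(src, ghost, 170, 100))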