
Android App Bug Fix - Video Overlays (OpenGL Compositing)
Budget: £100 (approx. $133)
- Proposals: 16
- Remote
- #4354486
- Expired
Description
Experience Level: Entry
I've been developing an Android app (using ChatGPT - I have no real Android development knowledge!) which animates a dog's mouth through the camera feed. I had the app working okay, but with some latency between the camera feed and the overlay (using CameraX and a PreviewView), so I tried to update it to an offscreen rendering approach, using an EGL context and a MediaCodec surface to render the camera feed plus overlays into a single composited output.
However, I have not implemented this correctly, and the camera feed just shows a black screen.
Here's ChatGPT's summary of what we had originally (working) and what we changed it to. Both approaches are saved on separate GitHub branches, which you will have access to: the previously working version is on the master branch, and the newly updated (not working) version is on a test branch.
What We Had Originally
- An Android app (Kotlin) using CameraX and a PreviewView to display the live camera feed.
- A BoundingBoxOverlay custom View drawn on top for YOLO pose detection (dog’s mouth, eyes, etc.).
- We wanted to animate the dog’s mouth in sync with audio but were limited by the traditional overlay.
What We Changed It To
Offscreen Rendering Approach:
- Created an EGL context and MediaCodec surface (via OffscreenRenderer.kt) to render the camera feed plus overlays into a single composited output.
- The camera feed is now provided by a SurfaceTexture created from an external OES texture (rather than using PreviewView).
- CameraX is configured to output frames to that SurfaceTexture.
- A custom pipeline composites the camera and any overlays (animated mouth, etc.) in OpenGL, then encodes to a video file with MediaCodec.
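To illustrate the intended structure, here is a minimal sketch of what that per-frame path usually looks like. This is not the project's actual OffscreenRenderer.kt; names like setupCameraInput() and drawFrame() are placeholders, and the draw/swap calls are left as comments because they depend on the project's own EGL setup:

```kotlin
import android.graphics.SurfaceTexture
import android.opengl.GLES11Ext
import android.opengl.GLES20

// Hypothetical sketch of the offscreen compositing loop described above.
class OffscreenRendererSketch {
    private lateinit var cameraTexture: SurfaceTexture
    private var oesTextureId = 0

    fun setupCameraInput(): SurfaceTexture {
        // Create the external OES texture the camera will write into.
        val ids = IntArray(1)
        GLES20.glGenTextures(1, ids, 0)
        oesTextureId = ids[0]
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId)
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)
        cameraTexture = SurfaceTexture(oesTextureId)
        // CameraX's Preview use case is then given a Surface wrapping this texture.
        return cameraTexture
    }

    fun drawFrame() {
        // 1. Pull the latest camera frame into the OES texture.
        //    Forgetting updateTexImage() is a classic cause of a black screen.
        cameraTexture.updateTexImage()
        val transform = FloatArray(16)
        cameraTexture.getTransformMatrix(transform)

        // 2. Draw the camera quad (shader must use samplerExternalOES),
        //    then the overlays (animated mouth, etc.) on top.
        // drawCameraQuad(oesTextureId, transform)
        // drawOverlays()

        // 3. Present to the MediaCodec input surface; without eglSwapBuffers()
        //    the encoder never receives a frame.
        // EGL14.eglSwapBuffers(eglDisplay, encoderSurface)
    }
}
```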
What We’re Trying to Achieve
A Snapchat-like pipeline:
- Grab the camera feed directly via a SurfaceTexture (external OES texture).
- Apply real-time overlays/filters (like mouth animation, hats, glasses) in OpenGL.
- Composite everything offscreen and encode it into a video file (for recording).
- Avoid the performance constraints of standard Android Views by relying on OpenGL compositing (similar to Snapchat filters).
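For the recording step, the usual pattern is to have MediaCodec supply an input Surface that the EGL pipeline renders into. A hedged sketch, with assumed resolution/bitrate values (the project's actual settings may differ):

```kotlin
import android.media.MediaCodec
import android.media.MediaCodecInfo
import android.media.MediaFormat

// Hypothetical encoder setup: the returned inputSurface would be wrapped
// with EGL14.eglCreateWindowSurface() and drawn into each frame.
val format = MediaFormat.createVideoFormat(
    MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720
).apply {
    setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface)
    setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000)
    setInteger(MediaFormat.KEY_FRAME_RATE, 30)
    setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1)
}
val encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
// createInputSurface() must be called after configure() and before start().
val inputSurface = encoder.createInputSurface()
encoder.start()
```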
Despite these changes, the feed currently displays a black screen. We suspect an issue with the GL pipeline, resolution mismatch, or camera frames not drawing as intended.
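One frequent cause of exactly this symptom is sampling the camera texture with a regular sampler2D instead of an external OES sampler. A fragment shader for the camera quad generally needs to look something like this (a sketch, not the project's actual shader):

```kotlin
// Hypothetical fragment shader for an external OES camera texture.
// Without the extension directive and samplerExternalOES, the quad
// typically renders black.
val OES_FRAGMENT_SHADER = """
    #extension GL_OES_EGL_image_external : require
    precision mediump float;
    varying vec2 vTexCoord;
    uniform samplerExternalOES sTexture;
    void main() {
        gl_FragColor = texture2D(sTexture, vTexCoord);
    }
""".trimIndent()
```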
Goal: Fix the offscreen renderer so that camera frames plus overlays appear in real time, and video recording also works as expected.

Azz F.
- Rating: 100% (24)
- Projects completed: 14
- Freelancers worked with: 13
- Projects awarded: 52%
- Last project: 27 Oct 2018
- Location: United Kingdom