Saturday, August 18, 2012

It's been a while, part deux a.k.a. research and development

It's indeed been a while, mainly because there hasn't been much new development on the hardware side.

Or maybe just because I've been lazy...

Here's the beef. The hands-on work with the gear is currently on hiatus, as we're in the midst of a move to a new studio, and have been for the last few months. All the stuff is stacked in boxes in a huge pile. The new studio will be pretty cool - we will have a much bigger green screen, with two cyc walls.

While the physical gear stuff has been on hold, I've been looking into the software side of things. Not so much for the motion control stuff, but rather for a virtual studio (though the two are related).

First of all, I've learned a bit of C++. I'm no guru, but I've gotten my first small programs to compile successfully. It's a start.

First dabblings...


A virtual studio needs a few building blocks (apart from the obvious green screen, camera etc. hardware): a block to tell where the camera is, a block to get the details of its lens, a block to ingest the video, a block to create the 3D background images, a block to remove the green screen and composite the talent over the background, and finally the code to tie all of these together into a user-friendly application.
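To make that a bit more concrete, here's a rough sketch of how those blocks could map onto C++ interfaces. Every name here is a placeholder of my own, not any existing library's API:

struct Pose {
    double x, y, z;          // camera position
    double qw, qx, qy, qz;   // camera rotation as a quaternion
};

struct Frame { /* pixel data would live here */ };

class CameraTracker {        // tells where the camera is
public:
    virtual Pose currentPose() = 0;
    virtual ~CameraTracker() {}
};

class LensModel {            // details of the lens
public:
    virtual double verticalFovDegrees() = 0;
    virtual ~LensModel() {}
};

class VideoSource {          // ingests the video
public:
    virtual Frame grab() = 0;
    virtual ~VideoSource() {}
};

class Renderer {             // creates the 3D background images
public:
    virtual Frame render(const Pose& pose, double fovDegrees) = 0;
    virtual ~Renderer() {}
};

class Keyer {                // removes the green screen and composites
public:
    virtual Frame composite(const Frame& fg, const Frame& bg) = 0;
    virtual ~Keyer() {}
};

// The glue: one iteration of the virtual studio loop.
Frame processFrame(CameraTracker& tracker, LensModel& lens,
                   VideoSource& video, Renderer& renderer, Keyer& keyer)
{
    Pose pose = tracker.currentPose();        // where is the camera?
    Frame live = video.grab();                // what does it see?
    Frame background =                        // virtual set from that viewpoint
        renderer.render(pose, lens.verticalFovDegrees());
    return keyer.composite(live, background); // talent over the background
}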

A lot of the stuff needed is open source, and could probably be hacked together with some glue code. So far so good.

I think the most complicated issue is accurately retrieving the camera's position and rotation.

I've toyed with the idea of attaching a Kinect sensor to the main camera. Some source code for Kinect Fusion-like tracking is available, but incorporating it won't necessarily be easy, to say the least. I already have the Kinect, so I will play around with this stuff for sure.

Another route I have considered is using a 9DOF IMU - there are some Arduino-compatible ones available, and they're plenty good enough for e.g. UAV drones, but I'm not sure whether they are stable and accurate enough for the high-precision tracking a camera needs. I'll probably buy one and test it in practice, as well as the cheap PS Move controllers - which work with e.g. LightWave3D's virtual studio tools (which I definitely need to look into more, by the way).
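As for the IMU route, the standard trick for getting a stable angle out of gyro plus accelerometer data is sensor fusion. Here's a toy single-axis complementary filter - my own sketch, not from any IMU library; serious implementations like the Madgwick or Mahony filters work in quaternions across all three axes, but the idea is the same:

// Integrate the gyro (smooth but drifts over time) and pull the estimate
// toward the accelerometer-derived angle (noisy but drift-free). For pitch,
// the accelerometer angle would be something like atan2(accelY, accelZ).
class ComplementaryFilter {
public:
    explicit ComplementaryFilter(double gyroWeight = 0.98)
        : alpha(gyroWeight), angle(0.0) {}

    // gyroRate: angular velocity from the gyro, rad/s
    // accelAngle: absolute angle from the accelerometer, rad
    // dt: time step, seconds
    double update(double gyroRate, double accelAngle, double dt) {
        angle = alpha * (angle + gyroRate * dt) + (1.0 - alpha) * accelAngle;
        return angle;
    }

private:
    double alpha;   // how much to trust the gyro vs. the accelerometer
    double angle;   // current estimate, rad
};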

The optimum case would probably be something like what iPisoft does for mocap: tracking the production camera's position using multiple cheap cameras (namely the PS Eye), which is similar to what many established virtual set manufacturers already do. Unfortunately, this is all commercial software, so there's no way to get the source, and I've had a bit of a hard time finding even whitepapers on the subject. I / we will very likely purchase iPisoft at some point though, to enable markerless motion capture of actors in our studio - a topic definitely worthy of its own post, but that's for later.
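The core math behind that kind of multi-camera tracking isn't secret, though - it's triangulation, and OpenCV exposes it directly. A quick sketch, assuming the cameras are already calibrated; the projection matrices and the matched 2D points below are made-up placeholders:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // 3x4 projection matrices (K * [R | t]) of two calibrated cameras.
    // These numbers are invented; real ones come out of calibration.
    cv::Mat P1 = (cv::Mat_<double>(3, 4) <<
        800, 0, 320, 0,
        0, 800, 240, 0,
        0,   0,   1, 0);
    cv::Mat P2 = (cv::Mat_<double>(3, 4) <<
        800, 0, 320, -800,   // second camera shifted 1 unit along X
        0, 800, 240, 0,
        0,   0,   1, 0);

    // The same marker as seen by both cameras (2xN, one column per point).
    cv::Mat pts1 = (cv::Mat_<double>(2, 1) << 320, 240);
    cv::Mat pts2 = (cv::Mat_<double>(2, 1) << 240, 240);

    // Triangulate to homogeneous coordinates, then dehomogenize.
    cv::Mat points4D;
    cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
    points4D.convertTo(points4D, CV_64F);
    cv::Mat p = points4D.col(0) / points4D.at<double>(3, 0);

    // With the numbers above this lands ~10 units in front of camera 1.
    std::cout << "3D position: " << p.rowRange(0, 3).t() << std::endl;
    return 0;
}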

Although the OpenCV computer vision library includes some camera tracking algorithms, they are not readily suitable for this task, as far as I can see. But it can probably be used for grabbing the video frames and processing them on the GPU - OpenCV has bits for e.g. calibrating lenses, which should come in very handy. I'll probably need to write my own keyer, and so far I've written a rudimentary one that works in real time, at least using a webcam. I've earlier done a more complex keyer for After Effects using the Pixel Bender language (shame on Adobe for discontinuing it in CS6).
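A rudimentary real-time keyer is basically just a green-dominance matte. Something along these lines reproduces the idea with OpenCV - a simplified sketch, not my actual code; the hard-coded thresholds and the flat dummy background are stand-ins:

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cap(0);           // first webcam stands in for the camera
    if (!cap.isOpened()) return 1;

    cv::Mat frame, bg;
    while (cap.read(frame)) {
        // Flat dummy background - a real pipeline would take the renderer's output.
        if (bg.empty())
            bg = cv::Mat(frame.size(), frame.type(), cv::Scalar(40, 20, 20));

        // Green-dominance matte: a pixel counts as screen where green
        // exceeds both blue and red by a hard-coded margin.
        std::vector<cv::Mat> ch;
        cv::split(frame, ch);          // OpenCV frames are in B, G, R order
        cv::Mat matte = (ch[1] > ch[0] + 40) & (ch[1] > ch[2] + 40);

        // Composite: background where the matte fires, camera pixels elsewhere.
        cv::Mat out = frame.clone();
        bg.copyTo(out, matte);

        cv::imshow("keyed", out);
        if (cv::waitKey(1) == 27) break;   // Esc quits
    }
    return 0;
}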



While all this happens to the video, the camera's parameters are sent to a 3D engine that renders the virtual scene from the real camera's point of view. The most likely candidate for this is Ogre, which is open source.
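The per-frame update on the Ogre side should be pleasantly small. A sketch, assuming the tracking and lens blocks already deliver a pose - TrackedPose and applyPose are placeholder names of mine:

#include <OgreCamera.h>

// Hypothetical pose coming out of whichever tracking block wins,
// plus the field of view from the lens block.
struct TrackedPose {
    Ogre::Vector3    position;
    Ogre::Quaternion orientation;
    Ogre::Radian     verticalFov;
};

// Called once per frame: make the virtual camera mimic the real one.
void applyPose(Ogre::Camera* camera, const TrackedPose& pose)
{
    camera->setPosition(pose.position);
    camera->setOrientation(pose.orientation);
    camera->setFOVy(pose.verticalFov);   // match the real lens
}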

The last bit needed is the glue code to turn all of this into an actual application (or a suite of applications).

How hard can it be?

Pretty hard, I'm afraid, but hey - others have succeeded in doing this stuff, so even if I don't, it may be an interesting ride to try ;-)
