Monday, August 27, 2012

D5100 @ shrubbery

I got a new camera this summer, a Nikon D5100. Here's a test video i did last weekend... I'll just copy the text from the video description here, lazy me...



Some tests with my new Nikon D5100 (and a Tamron 11-18 mm wide zoom), first outdoor footage. I tried to push the camera a little with high-contrast / high-detail scenes.

I also have a Nikon D90, which is a nice stills camera, but its video quality is pretty appalling. As i have some decent Nikon glass, when i wanted a DSLR with usable video quality, i went for yet another Nikon body, the D5100.

The overall impression is that the D5100 is a less serious camera: lighter, more plasticky feeling, with much poorer battery life etc. For stills, i prefer the D90, even if it's older and has lower resolution. The D5100 makes for a decent second body, and sometimes the extra resolution can be useful. And the adjustable LCD is a very nice feature.

As far as the video quality goes, the resolution is decent, even though it clearly resolves less than the full 1920x1080. Still, it's a huge leap from the D90. The camera handles highlights and overexposure pretty decently, and i like the colors overall. There's a shot at the end where i adjust the exposure from over to under, which should give an idea of how the camera handles these. The camera unfortunately shows rather severe moiré on fine details, seen e.g. in the ocean surface as crawling blue-red artifacts. These are partially hidden by the Youtube compression and look worse in the camera originals.

The other thing i tried out was the Tamron 11-18 mm wide zoom. It's a fun beast: 11 mm is pretty darn wide, but it still handles the geometry pretty well, without visible distortion. There are a few shots at the end of the clip where i walk around handheld with the wide lens (with the tripod attached to give it a little more mass for stabilization). Pretty wild even with such simple moves.

Camera settings: Picture Control Neutral; Sharpening 0; Contrast -1; Brightness +; Saturation 0; Hue 0; ISO 100; Auto ISO control ON; Max. sensitivity ISO 1600; Min. shutter speed 1/50; Aperture priority mode.

With these settings, i get semi-manual operation. I first choose the aperture. When i switch to live view, the camera chooses the rest of the exposure parameters automatically, but they can be nudged with the +/- (exposure compensation) setting to arrive at the desired ISO and shutter speed.

The camera will start at ISO 100 with the chosen f-stop, and the +/- setting varies the shutter speed. If the shutter speed hits the threshold of 1/50 while increasing the exposure, the +/- setting will then adjust the ISO until the second threshold of ISO 1600 is reached. After that, the +/- will go back to adjusting the shutter speed.
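Just to illustrate, here's a little C++ sketch of that threshold logic. This is purely illustrative - it's not camera firmware or any Nikon API, and the struct and function names are made up for the example:

```cpp
// Purely illustrative sketch, not camera firmware or any Nikon API: models how
// the D5100 appears to distribute a one-stop exposure increase in aperture
// priority live view with Auto ISO on, min. shutter 1/50 and max. ISO 1600.
#include <algorithm>
#include <cstdio>

struct Exposure {
    double shutter;  // exposure time in seconds, e.g. 1/200 s = 0.005
    int    iso;
};

// One press of the +/- control towards a brighter image (one stop).
Exposure brighten(Exposure e, double minShutter = 1.0 / 50.0, int maxIso = 1600)
{
    if (e.shutter < minShutter)
        e.shutter = std::min(e.shutter * 2.0, minShutter);  // phase 1: slow the shutter down to 1/50
    else if (e.iso < maxIso)
        e.iso = std::min(e.iso * 2, maxIso);                 // phase 2: raise the ISO up to 1600
    else
        e.shutter *= 2.0;                                    // phase 3: back to slowing the shutter
    return e;
}

int main()
{
    Exposure e = { 1.0 / 400.0, 100 };
    for (int i = 0; i < 8; ++i) {
        e = brighten(e);
        std::printf("1/%.0f s, ISO %d\n", 1.0 / e.shutter, e.iso);
    }
    return 0;
}
```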

In the vast majority of cases, the result is the same as i would have chosen with full manual control. When i hit an exposure combination i like, i just press the AE-L button to lock the settings. It's a bit of a flimsy and backwards way to adjust the exposure, but it works well enough.

Overall, i'm pretty happy with the purchase. This is not the best video DSLR out there, but it's pretty affordable, offers semi-decent image quality and, most importantly for me, lets me use my existing Nikon glass.


Friday, August 24, 2012

The Pi is in the closet

I've totally forgotten to mention the Raspberry Pi, a bare-bones Linux computer board that is about the size of an Arduino board (smaller than a pack of cigarettes), costs a few dozen bucks, and comes with a decent set of connections, including HDMI.

It has a decent amount of computing power (similar to a modern cell phone, AFAIK), so it can even process live HD video. It could be a pretty cool alternative for the brains of Moco, or for example a standalone camera tracker together with an Arduino / IMU.

It's rather nifty, i must say, and i have one in my closet. Still unopened, but i'm eagerly waiting to play with it...

Saturday, August 18, 2012

It's been a while, part deux a.k.a. research and development

It's indeed been a while, mainly because there hasn't been much new development on the hardware side.

Or maybe just because i've been lazy...

Here's the beef. The hands-on work with the gear is currently on hiatus, as we're in the midst of a move to a new studio, and have been for the last few months. All the stuff is stacked in boxes in a huge pile. The new studio will be pretty cool - we will have a much bigger green screen, with two cyc walls.

While the physical gear has been on hold, i've been looking into the software side of things. Not so much for the motion control stuff, but rather for a virtual studio (though the two are related).

First of all, i've learned a bit of C++. I'm no guru, but i've gotten my first small programs to compile successfully. It's a start.

First dabblings...


A virtual studio needs a few building blocks (apart from the obvious green screen, camera etc. hardware): a block to tell where the camera is, a block to get the details of its lens, a block to ingest the video, a block to create the 3D background images, a block to remove the green screen and composite the talent onto the BG, and finally the code to tie all these together into a user-friendly application.
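Just to make that structure concrete for myself, here's a rough C++ sketch of how the blocks might fit together. All the interface and struct names are made up for illustration; only cv::Mat is a real type (from OpenCV):

```cpp
// A rough structural sketch of the blocks listed above. All the interface and
// struct names are invented for illustration; only cv::Mat comes from a real
// library (OpenCV). Each block would wrap a separate (mostly open source)
// piece of code behind a small interface like this.
#include <opencv2/core/core.hpp>

struct CameraPose {              // where the camera is and how it's oriented
    cv::Vec3d position;
    cv::Vec3d rotation;          // e.g. Euler angles in radians
};

struct LensState {               // details of the lens for this frame
    double focalLengthMm;
    double distortionK1, distortionK2;
};

struct CameraTracker { virtual CameraPose pose() = 0; virtual ~CameraTracker() {} };
struct LensReader    { virtual LensState  lens() = 0; virtual ~LensReader()    {} };
struct VideoIngest   { virtual cv::Mat    grab() = 0; virtual ~VideoIngest()   {} };
struct Renderer      { virtual cv::Mat    render(const CameraPose&, const LensState&) = 0;
                       virtual ~Renderer() {} };
struct Keyer         { virtual cv::Mat    composite(const cv::Mat& talent, const cv::Mat& bg) = 0;
                       virtual ~Keyer() {} };

// The glue code: one iteration of the virtual studio loop.
cv::Mat processFrame(CameraTracker& tracker, LensReader& lens, VideoIngest& video,
                     Renderer& renderer, Keyer& keyer)
{
    cv::Mat live = video.grab();                                 // ingest the live video
    cv::Mat bg   = renderer.render(tracker.pose(), lens.lens()); // 3D BG from the real camera's POV
    return keyer.composite(live, bg);                            // remove the green, composite the talent
}
```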

A lot of the stuff needed is open source, and could probably be hacked together with some glue code. So far so good.

I think the most complicated issue is retrieving the camera's position and rotation accurately.

I've toyed with the idea of attaching a Kinect sensor to the main camera. Some source code for Kinect Fusion-like stuff is available, but incorporating it isn't necessarily easy, to say the least. I already have the Kinect, so i will play around with this stuff for sure.

Another route i have considered is using a 9DOF IMU - there are some Arduino-compatible ones available, and they are plenty good enough e.g. for UAV drones, but i'm not sure whether they are stable and accurate enough for the high-precision tracking the camera needs. I'll probably buy one and test it in practice, as well as the cheap PS Move controllers - which work e.g. with LightWave3D's virtual studio tools (which i definitely need to look into more, by the way).

The optimum case would probably be something like what iPisoft does for mocap: tracking the production camera's position using multiple cheap cameras (namely the PS Eye), which is similar to what many established virtual set manufacturers already do. Unfortunately, all of this is commercial software, so there's no way to get the source, and i've had a hard time finding even whitepapers on the subject. I / we will very likely purchase iPisoft at some point though, to enable markerless motion capture of actors in our studio - a topic definitely worthy of its own post, but that's for later.

Although the OpenCV computer vision library includes some camera tracking algorithms, they are not readily suitable for this task, as far as i can see. But it can probably be used for grabbing the video frames and processing them on the GPU - OpenCV has bits e.g. for calibrating lenses, which should come in very handy. I'll probably need to write my own keyer, and so far i've written a rudimentary one that works in real time, at least with a webcam. I've earlier done a more complex keyer for After Effects using the Pixel Bender language (shame on Adobe for discontinuing it in CS6).
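For what it's worth, the skeleton of such a keyer needs surprisingly little OpenCV code. This is just a minimal sketch of the general idea (not my actual keyer) - the HSV thresholds are rough guesses and would need tuning against a real green screen and proper lighting:

```cpp
// Minimal real-time chroma key sketch with OpenCV: mark the "green enough"
// pixels, then composite a background image into them.
#include <opencv2/opencv.hpp>

int main()
{
    cv::VideoCapture cap(0);                       // default webcam
    if (!cap.isOpened()) return 1;

    cv::Mat frame, hsv, mask, inv, result;
    cv::Mat background(480, 640, CV_8UC3, cv::Scalar(40, 40, 40)); // stand-in for the rendered BG

    while (true) {
        if (!cap.read(frame)) break;
        cv::resize(frame, frame, background.size());

        // Build a mask of green pixels in HSV space.
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(40, 60, 60), cv::Scalar(80, 255, 255), mask);
        cv::medianBlur(mask, mask, 5);             // clean up speckles in the mask

        // Composite: background where the mask is set, live video elsewhere.
        cv::bitwise_not(mask, inv);
        background.copyTo(result);
        frame.copyTo(result, inv);

        cv::imshow("keyer", result);
        if (cv::waitKey(1) == 27) break;           // Esc quits
    }
    return 0;
}
```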



While all this happens to the video, the camera's parameters are sent to a 3D engine that renders the virtual scene from the real camera's point of view. The most likely candidate for this is Ogre, which is open source.
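Feeding the tracked parameters to the engine could look roughly like this in Ogre's classic 1.x API - the TrackedPose struct is just my own placeholder for whatever the tracking block ends up producing:

```cpp
// Hedged sketch of pushing tracked camera parameters into an Ogre camera, so
// the virtual scene gets rendered from the real camera's point of view. The
// TrackedPose struct is a made-up placeholder; the Ogre calls are from the
// classic 1.x API.
#include <OgreCamera.h>
#include <OgreQuaternion.h>
#include <OgreVector3.h>

struct TrackedPose {
    Ogre::Vector3    position;      // in the same world units as the virtual set
    Ogre::Quaternion orientation;
    Ogre::Radian     verticalFov;   // derived from the real lens' focal length and sensor size
};

void syncVirtualCamera(Ogre::Camera* cam, const TrackedPose& pose)
{
    cam->setPosition(pose.position);        // Ogre 1.x style; newer versions attach
    cam->setOrientation(pose.orientation);  // the camera to a SceneNode instead
    cam->setFOVy(pose.verticalFov);         // match the virtual FOV to the real lens
}
```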

The last bit needed is the glue code to make this into an actual application (or a suite of applications).

How hard can it be?

Pretty hard, i'm afraid, but hey, others have succeeded at this stuff, so even if i don't, it should at least be an interesting ride to try ;-)