Structured Light 3D Scanning Instructable online.

Kyle McDonald has created an Instructable showing how to do structured light 3D scanning at home. The technique comes out of the computer vision literature. The basic idea: project a known pattern onto a surface, and the shape of that surface can be decoded from how the pattern deforms. Kyle has taken this to a new aesthetic level, developing some very beautiful point cloud renderings from the resulting data:

Point Clouds with Depth of Field from Kyle McDonald on Vimeo.
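For the curious, here is what "decoding" can look like in its simplest form. This is a minimal sketch of classic three-step phase-shift decoding, one common structured light scheme, and not necessarily the exact method Kyle uses; the function name and the assumption of 120-degree-shifted sinusoidal patterns are ours:

    import numpy as np

    def decode_three_phase(i1, i2, i3):
        """Return the wrapped phase of the projected sinusoid at each pixel.

        i1, i2, i3 -- grayscale captures of the scene under sinusoidal
        patterns shifted by -120, 0, and +120 degrees, as NumPy arrays.
        """
        i1, i2, i3 = (np.asarray(im, dtype=float) for im in (i1, i2, i3))
        # Standard three-step phase-shift formula; the surface's shape
        # shows up as a deformation of this phase map.
        return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

Unwrapping that phase and triangulating against the known projector position turns it into depth, which is where the point clouds come from.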

Kyle’s efforts are related to Radiohead’s House of Cards video, which used laser scanning and structured light to achieve a similar effect.

How is this related to the future of photography? Well, first of all, it demonstrates the application of photography-like effects to 3D data, with beautiful results. But more importantly, it shows one of the possible effects of augmenting the flash, which is still a relatively “dumb” part of a camera. Smart flashes multiplied by smart optics will equal new modes of capture and expression.

Posted in ComputationalPhotography | 1 Comment

Welcome!

Hey all, thanks for visiting. We’re a bit unprepared for all the traffic and attention. Sorry if things are a mess or not explained clearly enough!

Here’s the executive summary: there’s a new field of research happening right now called “Computational Photography”. A bunch of labs and a few independents are working to build the cameras of the future. These cameras allow the depth of field, the object in focus, and the position of the camera to be modified after the picture is taken. None of that is possible with a traditional camera. What we’re showing here is just one such camera. It is not the only kind of computational camera, and we don’t actually expect people to haul around four-foot arrays ten years from now.

So what do computational cameras look like? Check out these ones at MIT, or these ones by Shree Nayar to get a start. We modeled ours after this incredible one at Stanford.

The goal here is to make an affordable, accessible camera system for understanding computational photography and experimenting with these techniques. We bought the cameras broken on eBay and repaired them; cost per camera was about $30. We also show a way to play with these techniques using just one camera. Don’t count on that Instructable staying up, though; Instructables has been a bit unreceptive to what we’ve been posting there, for whatever reason.

We’re aware that the GIFs look really bad. Here are some links to higher-resolution images if you don’t want to install our software and play with the included images (or your own!). Although the images show artifacts, remember that the real power here is that this “focusing” was done after the picture was taken, and that the virtual aperture is several feet wide.

Foreground:

Middleground:

Background:

Far distance:

Skater Near
Skater Far

Garden Near
Garden Far

Tree Near
Tree Far

Posted in Uncategorized | 4 Comments

A Future Picture Tutorial, Parts 1 and 2.

Matti and I just published two tutorials on Instructables.

The first introduces the field of Computational Photography and motivates the project.

The second shows you how to simulate our Large Light Field Camera Array with a single camera.


Enjoi.

Posted in ComputationalPhotography, LLFC, Photography | 5 Comments

FuturePicture: The Large Light Field Camera Array, Part 1.

FuturePicture is about the future of photography. It is about cameras with capabilities that sound like science fiction, and look like a million bucks.

So you want to influence the future of photography? Well, you gotta build a camera, ’cause this future isn’t for sale, yet.

And that’s exactly what Matti and I did. Twice.

First Large Light Field Camera Array:

Second Large Light Field Camera Array:

Computational cameras have only come into being over the last two decades. Why just now? Cheap computation, plentiful sensors, and a hundred and fifty years of relative design stagnation explain some of it. Computational photography is a young field, still deciding exactly what it is and what it is doing, but the undeniable common factor is that a powerful camera is involved. This “camera” could look perfectly ordinary or be completely unrecognizable, understandable only by analogy, from a fly’s eye to the photosensitive spots on nematodes. Computational photography takes inspiration from disparate sources: biology, computer vision, optics, and statistics. The price of admission is math prowess, some programming power, and a camera. Or twelve.

Well, together, we (Daniel Reetz and Matti Kariluoma) have that covered. We aim to take computational photography out of the lab and into practical use. We want to make the hardware affordable and accessible, because outside the ivory towers of academia there are creative people of all stripes who could use and abuse this kind of photographic power.

So, what does this thing do? The primary function of this array is to capture the light field, a four-dimensional function that describes all the rays in a scene. Surrounding you, now and always, is a reverberating volume of light. Just as sound echoes around a room in complex ways, bouncing from every surface, so does light, creating a structured volume. Traditional single-lens cameras project this three-dimensional world of reflected light onto a two-dimensional sensor, tossing out the 3D information in the process and capturing only a faint, sheared sliver of the actual light field. By taking many captures at slightly shifted locations, it is possible to record a crude representation of the light field. The number of captures determines the angular resolution; our 12 captures at 7 cm separation are a bare minimum. What can you do with a light field? The lowest-hanging fruit is computational refocusing, by which we mean focusing the image AFTER it is captured.

The particular method of computational refocusing that we employ creates an enormous virtual aperture. The size of the virtual aperture determines a few things: one, the size of the object you can “see through”; two, the depth of the focal plane, which is currently extremely shallow, on the order of a few centimeters at most. In this image, we can see right through Poodus as he flies through the air.
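Here is a minimal sketch of the shift-and-add refocusing idea, assuming the captures come from equally spaced positions along a straight horizontal rail, as with our array. Alignment, calibration, and cropping are glossed over, and the names here are illustrative rather than taken from our actual software:

    import numpy as np

    def refocus(views, shift_per_view):
        """Average the views after shifting each one horizontally.

        views          -- list of HxW (or HxWx3) arrays, in capture order
        shift_per_view -- horizontal shift in pixels between adjacent
                          views; larger values pull the focal plane closer
        """
        acc = np.zeros_like(np.asarray(views[0], dtype=float))
        center = (len(views) - 1) / 2.0
        for k, view in enumerate(views):
            dx = int(round((k - center) * shift_per_view))
            # np.roll wraps at the borders; fine for a sketch,
            # crop the edges in practice.
            acc += np.roll(np.asarray(view, dtype=float), dx, axis=1)
        return acc / len(views)

Objects whose parallax between adjacent views matches shift_per_view add up coherently and stay sharp; everything else, including a foreground occluder like Poodus, is smeared across the several-foot virtual aperture until it effectively disappears.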

Camera array construction and software will be the topic of another post; this one just introduces our work on the array and makes public some of its output. A brief summary: we employ modern rapid prototyping equipment (laser cutters, flatbed scanners, digital micrometers) and open source hardware and software (Arduino and StereoDataMaker). All the technology we develop will be released under open-source licenses to encourage, as much as possible, the development of similar camera arrays and to speed the hobbyist adoption of computational photography techniques.

A brief introduction: Daniel Reetz is an artist, camera hacker, and graduate student in the visual neurosciences. Matti Kariluoma is a CS/Math major with a focus on artificial intelligence. Together, we’re working on computational photography, and we’re going to bring our respective backgrounds to bear on it. Want to get in touch? Leave a comment here.

Posted in ComputationalPhotography, LLFC | 12 Comments

P.P. Sokolov’s Historical Work on Light Field Photography/Integral Imaging.

FuturePicture is proud to present its first contribution to the field of Computational Photography: the translation of Sokolov’s seminal “Autostereoscopy and Integral Photography by Professor Lippman’s Method”.

In 1908, Nobel Prize laureate Gabriel Lippmann proposed a new kind of camera — one with many lenses, which would capture angular information. These “Integral Photographs”, so named because they represent the sum of many small photographs taken with many small lenses, represent the basis for many prototype computational cameras that have only come into existence since the advent of cheap digital cameras and plentiful computing power.

Sokolov, a Russian, published a thorough mathematical investigation of the ideas Lippmann set into motion in the journal “Журнал Общества любителей естествознания” (the Journal of the Society of Natural Science Enthusiasts). His paper, under the guise of “autostereoscopic” photography, was really about what we now understand to be pinhole light field photography. He derived the lens curvature equation, investigated various types of optics, and implemented what might be the first pinhole-based light field capture system. He also roughly worked out the relationship between angular samples and a complete 3D impression of a scene, estimating that 1/5 mm resolution ought to be good enough.

The camera system he manufactured to test his work is impressive, especially given the technology available when it was created. Plates of copper and cardboard were engraved with 1200 conical pinholes and applied to a photographic emulsion. After the emulsion was exposed and developed, it was backlit; through the pinholes, a 3D image was visible. The experiment was a success.

The primary source of citations for Sokolov’s work is Dudnikov, whose work deserves a post of its own. Unfortunately, much of the excellent work from Soviet and pre-Soviet times was, and still is, unavailable to Western audiences, due to geography, language, political factors, and time. Now, in the digital age, these excuses no longer hold: we have translation engines, we have the internet, and the days of Cold War secrecy are over. What is even less acceptable is that some authors in the field have carelessly transposed P.P. Sokolov (his first name and patronymic are yet unknown) into “A.P. Sokolov”, which was at one time the first Google result for P.P. Sokolov. A.P. Sokolov is a name that belongs to several Russian physicists, not to the man who did this work some hundred-odd years ago.

Ekaterina Avramova requested the article through the library at IATE (the Obninsk Institute for Nuclear Power Engineering, a branch of the National Research Nuclear University MEPhI). Originally, the request was sent on to the Central Scientific Library, but their archives had been closed because of the desirability of their building in the very heart of Moscow (Kuznetsky Most), and the library’s contents were (and still are) being moved… somewhere. The order was forwarded to the Lenin Library, which delayed receipt of the article. The former Lenin Library is now called the Russian State Library (РГБ, Российская государственная библиотека).

This is a proper reference in their terms:
П.П. Соколов – Автостереоскопия и интегральная фотография по пр. Липману // Журнал Общества любителей естествознания. – 1911. – Т. 123. – с. 23 (Изд-во МГУ-ПРЕСС)
(In English: P.P. Sokolov, “Autostereoscopy and Integral Photography by Professor Lippmann’s Method,” Journal of the Society of Natural Science Enthusiasts, 1911, Vol. 123, p. 23, MGU-PRESS.)

The primary translation work was done by Ekaterina Avramova and the editing was done by Daniel Reetz, founder of FuturePicture. Together, we hope that you really enjoy this paper and the historical background it so beautifully illustrates, and we also hope that, with time, the record on Sokolov’s work is set straight.

Please direct any questions or comments to Daniel Reetz. d a n r e e t z ]a t[ g m a i l ]d o t[ c o m

Posted in History | Leave a comment

Feynman: The Tremendous Mess.

Generally, “camera” refers to an optical picture-taker. But “cameras” need not be limited to what we can see.

Posted in Photography | 1 Comment

A Box With A Hole In It.

In a hundred and fifty years, the photographic apparatus has barely changed shape.

The camera remains a box with a hole in it.

But this box has been busy. Nearly every surface, every display, every inch of public space is covered in photographic imagery, some of it moving. Even our trash is littered with it. Photographic imagery covers the world in the Sherwin-Williams sense. It’s no surprise, then, that after a lifetime of consuming and producing it, we find photographic imagery instantly understandable, totally persuasive, and, most seductively, representative of the world existing outside the box.

But familiarity breeds contempt. What does a camera leave out? Where does a camera simplify? Are your flower pictures just a garden path?

One of the many answers is technical. The design of the camera constrains its output. The constraints are manifold. The first cameras couldn’t even focus. But what are current cameras lacking? Ask Gabriel Lippmann, Nobel Laureate and inventor of color film:

The current most perfect photographic print only shows one aspect of reality; it reduces to a single image fixed on a plane, similar to a drawing or a hand-drawn painting. The direct view of reality offers, as we know, infinitely more variety. We see objects in space, in their true size, and with depth, not in a plane. Furthermore, their aspect changes with the location of the observer; the different layers of the view move with respect to one another; the perspective gets modified, the hidden parts do not stay the same; and finally, if the beholder looks at the exterior world through a window, he has the freedom to see the various parts of a landscape successively framed by the opening, and as a result, different objects appear to him successively.

(Épreuves réversibles. Photographies intégrales, 1908; courtesy of Todor Georgiev, original translation from the French by Frédo Durand)

Lippmann was the first to point this out: cameras record only partial spatio-angular information about the world. The camera as we know it integrates angular information away at each pixel. The remarkable thing is that he built a prototype camera that not only captured angular information (in other words, the direction of an incoming ray of light), but re-presented it on a piece of film.
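In the light field notation the field uses today (ours here, not Lippmann’s), a conventional pixel at (x, y) reports a single number: the light field L integrated over every ray direction (u, v) that the aperture A admits:

[math]I(x,y) = \int\!\!\int_{A} L(x, y, u, v)\, du\, dv[/math]

An integral photograph keeps separate samples of L at distinct (u, v) instead of summing them, which is exactly the angular information a single-lens camera throws away.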

Which is a subject for a later, more substantive post. Just setting the stage here — Welcome to FuturePicture.

Posted in Photography | 2 Comments

POST

This is a link.

This is bold.

This is italic.

TODO:

  • LaTeX installed and working CHECK
  • [math]e^{i\theta} = \cos{\theta} + i\sin{\theta}[/math] CHECK
  • Image management
  • Wiki + LaTeX CHECK
  • Proper code support in WP
Posted in Uncategorized | Leave a comment