

First Epic model could shoot 3D by itself

I think this discussion should end now, as Robert doesn't know the huge job that is involved, and people will start getting wrong impressions from the thread without reading it properly. I think that anyone who posts this:
I am not a WGOA member. I won't give you a script but I will give an outline to any reputable producer, whatever you want I can give it to you in 60 days or less. All I ask for once you receive it, all I ask for in payment upon delivery is this: http://www.bhphotovideo.com/c/product/597424-REG/Canon_3686B001_VIXIA_HV40_High_Definition.html

In the long term, all I want is 0.5% of whatever it makes and it is to be held until it reaches a certain amount.

I can do it all: comedy, romance, action, adventure, horror, and sci-fi.

All you need to do is to give me a genre and a logline to work with.

I'll update this ad when I get a good offer.

Since this is reduser.net, you can also offer to give me a RED ONE as payment after turning in a finished script if you want that.

Should be taken with a pinch of salt to be honest.
 
I'm sorry Robert but this just doesn't make sense to me, maybe I'm not getting it.

To have a stereo image, whether it's "true stereo" or flat cut-outs, you need to have two slightly different perspectives. The method you've shown has no depth to it; everything is on the same "plane".

I know Alice/Clash were converted to 3D but it wasn't done by some software or algorithm.

The process is very similar to projection mapping, where you make objects/meshes in a 3D program like Maya/Blender/3dsmax and then "project" the footage onto them. This way, when you render the finished image out in stereo, you have two slightly different perspectives, which creates the illusion of depth.
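To make the projection idea concrete, here's a toy sketch (not any studio's actual pipeline; all names invented) of depth-based view synthesis: once every pixel has a depth, the second eye is just the same image with depth-dependent horizontal shifts, and the gaps it leaves behind are the occlusion holes that make the work so laborious.

```python
# Toy depth-based view synthesis: images are row-major lists of pixel
# values, depth is a matching grid (larger = farther away). Nearby
# pixels shift more; pixels with nothing behind them leave None holes
# that an artist would have to paint in by hand.
def synthesize_right_eye(image, depth, k=100.0):
    h, w = len(image), len(image[0])
    right = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            shift = int(round(k / depth[y][x]))  # near pixels shift more
            if 0 <= x + shift < w:
                right[y][x + shift] = image[y][x]
    return right  # None entries are the occlusion holes
```

Running this on even a one-row "image" shows both effects at once: near pixels displace farther, and holes open up where foreground used to cover background.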

There are strict limits to this, as anyone who's done camera projection knows, and achieving a good finished result is very laborious and time-consuming.

The difference between Alice and Clash is as follows: Alice was conceived as a 3D movie to begin with. They stuck to shooting it as a single-camera show because of the on-set implications that Tim Burton didn't want to face, coupled with Disney/Tim Burton's success with the post-converted Nightmare Before Christmas re-release.

Clash, on the other hand, wasn't originally intended as a 3D picture. I know from friends who worked on it that they handed off their comps to a company in India, which rushed through all the painstaking work in a matter of mere months.

It was a last minute rush job outsourced to the lowest bidder.

Either way, both movies pale in comparison to shooting S3D on set.

Oh, and CineForm have great tools for S3D workflow management.

I know what Maya is; it's CGI software for both animation and special effects. I wouldn't have imagined it being used for the 3D conversion of a 2D film. Blender and 3dsmax are similar pieces of software that do the same things. So you're saying that if a little more work had been done, the 3D for Titans would have been a little more convincing? In any case, they'll have a chance to get it right on the sequel: http://en.wikipedia.org/wiki/Clash_of_the_Titans_(2010_film)#Sequel

So with my steps for converting the 1959 Ben-Hur to 3D, where would Maya, Blender, and 3dsmax come in?
 
I think this discussion should end now, as Robert doesn't know the huge job that is involved, and people will start getting wrong impressions from the thread without reading it properly. I think that anyone who posts this:

QUOTE=Joe_Cracker:START:
Originally Posted by Robert McGee
I am not a WGOA member. I won't give you a script but I will give an outline to any reputable producer, whatever you want I can give it to you in 60 days or less. All I ask for once you receive it, all I ask for in payment upon delivery is this: http://www.bhphotovideo.com/c/produc...efinition.html

In the long term, all I want is 0.5% of whatever it makes and it is to be held until it reaches a certain amount.

I can do it all: comedy, romance, action, adventure, horror, and sci-fi.

All you need to do is to give me a genre and a logline to work with.

I'll update this ad when I get a good offer.

Since this is reduser.net, you can also offer to give me a RED ONE as payment after turning in a finished script if you want that.

:END:

Should be taken with a pinch of salt to be honest.

Listen Jay, unless you've got a job offer for me, keep this thread on topic! Just in case you forgot what the topic was, it's about how you make the RED Epic XS35 into an effective 3D camera by only using one at a time, shooting 2 images @ 4K resolution through a single lens on a single camera.
 
I read an article in American Cinematographer discussing a 3D IMAX film shot by astronauts. They were trained for a few days and were sent into space with an IMAX camera that had two lenses on it, each lens recording to either the left or right side of one piece of film.

Perhaps such a mount could be used on an Epic, and you could shoot 2K on either side of the sensor and then put them together in post.

Does this make sense?
 
But the issue is that the example you gave is not an S3D image. Everything is on the same focal plane; there is no parallax/convergence, which is gained by shooting with two cameras.
Explain to us how you would gain a genuine S3D image with selectable parallax/convergence, because I have no clue how it could be done with the method you're suggesting.
 
Naa, why can't they just link two Red Epic cameras together and sell them with a lower price tag than two cameras separately?

Maybe something like: price(Red Epic 3D) = 1.5*price(Red Epic).

Just my two euro cents...
 
I'm scratching my head here... I don't understand what we're trying to discuss.... I think I do, but I'm not sure what point Robert was trying to make. The original image showed us nothing. As he said, he just put a 3D effect on it by separating the red/blue to resemble an anaglyphic image. But it's not one. So I don't understand the point or the context here.

Yes, a single camera can be used to shoot 3D with the proper optical set-up. For example, you can use a beam splitter that alternates between two separate lenses and then shoot at double your intended frame rate. So what we end up with are left and right eye images that are slightly different in time. Ironically, this is actually a better timing approach for delivery systems using shutter glasses. And yes, this is a technique that has been used.
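A rough sketch of the post step that approach implies, assuming the recording is simply a flat frame sequence in which exposures alternate eyes (even indices left, odd indices right; the function name is made up):

```python
# Demultiplex a double-frame-rate sequence into left/right eye streams.
# Assumes the beam splitter alternates eyes each exposure, so even
# indices came through the left lens and odd indices through the right.
def demux_stereo(frames):
    left = frames[0::2]   # even-indexed frames -> left eye
    right = frames[1::2]  # odd-indexed frames -> right eye
    return left, right

# Each eye then plays back at the intended (half) frame rate; the right
# eye sits half a frame later in time, which happens to match the
# alternating presentation of shutter-glasses displays.
```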

3D imaging from two lens sources can also be done on the same camera by recording side-by-side stereo images onto the same sensor at the same time. A dual-lens prism system with 2X anamorphic glass feeding a single 4K/5K sensor could be an interesting solution. I don't know if anyone has ever tried it, but it has been discussed. Dual-lens systems for stereo use have been discussed for the EPIC 617, since it has tons of sensor real estate and a 3:1 aspect ratio to begin with.
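As a toy illustration of that side-by-side recording (pure-Python pixel grids standing in for real footage, and a made-up function name; real work would use numpy/OpenCV), post just splits the frame down the middle and undoes the 2X squeeze:

```python
# Split a side-by-side stereo frame from a single sensor into left and
# right eye images, then undo a 2x horizontal anamorphic squeeze by
# doubling every column. A frame is a row-major list of pixel values.
def split_and_unsqueeze(frame):
    half = len(frame[0]) // 2
    left = [row[:half] for row in frame]
    right = [row[half:] for row in frame]
    # 2x desqueeze: repeat each column horizontally
    unsq = lambda img: [[p for p in row for _ in (0, 1)] for row in img]
    return unsq(left), unsq(right)
```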

2D to 3D conversion is a different animal altogether. To do it RIGHT takes a lot of work and artistic ability. ...As has been pointed out by others. And it does have its limitations. As for 3D software such as Maya, MAX, Blender, and several others that are commonly used, they can have a use here. It depends on the approach being used and how far the people doing conversions wish to take certain FX. Perhaps a 2D cut-out isn't right for a character in a 3D converted film, so we build a CG replacement, constructed partially with the 2D image data of the original.

There are one-click 2D to 3D software solutions. In fact, some of the new 3D HDTVs have real-time 2D to 3D conversion built-in. So you never want to take those shitty 3D glasses off, ever!!! But in reality, it sucks. I'm so glad Panasonic is NOT putting this technology in their TVs right now.

The only RED cameras ever shown side by side in a 3D rig render have been the 2/3" Scarlet with mini-primes. RED has never shown a side-by-side 3D render of Epic. The 2/3" Scarlet has grown in size since that render; I really don't know if it's possible now. It's hard to tell just from pictures or from looking at a brain in a display case. To me, they still look a bit large for this application, but they may just be small enough to work.
 
I read an article in American Cinematographer discussing a 3D IMAX film shot by astronauts. They were trained for a few days and were sent into space with an IMAX camera that had two lenses on it, each lens recording to either the left or right side of one piece of film.

Perhaps such a mount could be used on an Epic, and you could shoot 2K on either side of the sensor and then put them together in post.

Does this make sense?

Well, Michael has an example here. IMAX 3D is not two IMAX cameras side by side; IMAX 3D is actually shot with a single camera that holds two 65mm reels. From there you have two 2D images, and when projected, either on film or digitally, you get a proper 3D effect. So yes, it is possible for a single camera to pick up two 2D images and get a proper 3D effect. I know it's pretty tricky, but since this is a digital camera, there would be a software issue, and if that issue isn't resolved and they put it in there, you'll get a 3D image that has very little or no depth at all.
http://en.wikipedia.org/wiki/IMAX#IMAX_3D

The answer lies in something known as the Fusion Camera System, the same 3D camera system that was used to shoot Avatar; it was also used on other films such as Spy Kids 3D, Journey to the Center of the Earth, and Tron: Legacy. Just imagine the same imaging system inside a RED Epic XS35, shooting at 4K for both left- and right-eye images.
 
Robert, you really need to investigate S3D shooting and conversions a bit more before posting again on this subject, because your posts in this thread merely show that you don't really know the basics of stereography.

To get a stereo image you need to combine two perspectives. To shoot S3D you have roughly three options:
1) shoot with two cameras (side by side or using a beamsplitter)
2) shoot with a single camera using a prism to put two lenses on it, thus splitting the frame into two views.
3) shoot with two sensors through a single lens (I've only seen theoretical proposals on this).
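To put a number on the "two perspectives" requirement: in a simple parallel-camera model, screen disparity is focal length times interaxial separation divided by subject depth. A back-of-the-envelope sketch (the function and all values are illustrative, not any camera's spec):

```python
# Disparity in a simple parallel-camera stereo model:
#     disparity = f * B / Z
# where f is focal length, B the interaxial separation between the two
# viewpoints, and Z the subject distance (all in mm here). With a
# single point of view, B = 0, so disparity is zero at every depth --
# which is why shifting one image in post adds no real depth.
def disparity(f_mm, interaxial_mm, depth_mm):
    return f_mm * interaxial_mm / depth_mm

near = disparity(35, 65, 2000)    # subject 2 m away: noticeable offset
far = disparity(35, 65, 20000)    # subject 20 m away: a tenth as much
flat = disparity(35, 0, 2000)     # single viewpoint: always zero
```

The point of the three options above is that each one gets B above zero by some optical means; no sensor trick can do it after the fact.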

Pedro placed photos of various rigs here: http://reduser.net/forum/showthread.php?t=29034

For post-conversion from 2D to 3D, here's a video that explains the process a little:
http://www.wired.com/underwire/2009/08/video-how-imax-wizards-convert-harry-potter-to-3-d/

Some new 3D TVs do attempt to create faux-3D from moving images in real time, using motion parallax to estimate depth, but all bets are off on that one; although it works okay-ish on a few shots, most of the time it's utter rubbish.

Barend
 
I've seen those before, but I hadn't seen that video. When I saw Half-Blood Prince in IMAX, only the first few minutes were in 3D. Now the last two are going to be shot entirely in 3D.

I know some of you are very skeptical about my idea so I decided to see for myself what the outcome would be from such a system. I don't know if it's going in the right direction or if it's a waste of time.
single3dcamera.jpg
 
There's still no depth to the image. Technically, what you're proposing is impossible.
I would love it not to be, but unfortunately it's not that simple.
 
Well, the idea is based on the outcome from shooting with an IMAX 3D camera. Now, an IMAX 3D camera is not required to capture depth, just two images side by side on a 65mm negative.

So what about a single camera that captures this with a single lens:
1031059sidebyside.jpg
 
I still don't understand how this could possibly work. The IMAX 3D camera records to two reels, so you would need two sensors in the Epic, so that the convergence/parallax can be set to give the 3D effect. How do you plan to do that when there is no parallax in a 2D image taken with a single sensor? When there is no parallax, the image must be recreated in Maya or a 3D compositor to give a 3D effect. It cannot be done by just shaving the corners, offsetting the image, and applying a 3D filter to the final image.
 
How do you plan to do that when there is no parallax in a 2D image taken with a single sensor?

Well, in that case, we stop looking at the software issue and look at the hardware issue. Now, for one, is there such a thing as a parallax sensor? Maybe not, but that doesn't mean one can't exist. In theory it would use my system of adding more horizontal lines, but the image would be chopped perfectly, and the two parallax sensors would take both images, capturing depth as well as left- and right-eye images at 4K resolution. There may be a need for two sensors, but the idea is still to aim for just one.

Now, with just one or two parallax sensors, your 3D outcome should look like this:
Parallax.gif


Now Jay, is that what you would want to see from such a one camera system?

http://en.wikipedia.org/wiki/Parallax
 
OK, now I'm starting to see where you're going.
It may be possible, with a custom lens, for the sensor to essentially be split in half (north/south) and record an S3D image onto a single sensor, and then reconstruct as usual. But the lens would have to have some form of dual lens, then take a side-by-side image and place it top and bottom. I believe Sony and Panasonic have been working on something like this already.

But at that stage I think you're up to the price of a dual-camera system anyway.
 
So yes, it is possible for a single camera to pick up two 2D images and get a proper 3D effect. I know it's pretty tricky, but since this is a digital camera, there would be a software issue, and if that issue isn't resolved and they put it in there, you'll get a 3D image that has very little or no depth at all.
http://en.wikipedia.org/wiki/IMAX#IMAX_3D

No. It is not a software issue.

It is a physical hardware issue - you need to get two images projected FROM DIFFERENT POINTS OF VIEW.

Not two images from the same point of view.

For that you need a clever stereo lens system - of which there are many already and I'm sure there will be ones introduced in the future.

Not a "parallax sensor." Not "adding more lines."

E.g. like this:
http://www.loreo.com/pages/cameras/camera_3dcap_1.html
Except add an anamorphic squeeze as well.

Or to use the IMAX 3D camera example, this:
http://viperfs.com/VFXTLK/imax.JPG

To me, that was always the point of the largest RED sensor sizes: they obviously were designed for "medium format stereo".

But the same procedure works on a FF35 or S35 sensor as well.

Now an IMAX 3D camera is not required to capture depth, just two images side to side and on a 65mm negative.

No. You need left-right difference, i.e. parallax: stuff that provides depth information.

Bruce Allen
www.boacinema.com
 
I'd like to add a little something, although I am not a 3D expert either.

I couldn't get my head round the question of how to pull the right-eye information from the material if it's just not there... rotoscoping? *yawn*
Adding a 2nd camera in After Effects? Nope, that's just like someone flicking your eyes.
Panning a duplicated layer in After Effects + adding a 2nd camera? Wait a minute, that's just as if I did nothing at all. :ohmy:

So what I tried in an experiment was:

1. Perform a stable-as-possible shot h.o.r.i.z.o.n.t.a.l.l.y.
2. Put the shot in an After Effects comp.
3. Duplicate the layer and move the uppermost 1 or 2 frames to the right, so it gives you the "right eye" information, pulling it from 1 or 2 frames beyond... *tumbleweed*
4. Apply the AE onboard "3D Glasses" filter to the lowermost layer; turn the uppermost off.
5. Assign the lowermost layer to the left eye, the uppermost to the right eye.
6. Render in anaglyph.
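The steps above can be sketched outside After Effects too. A minimal Python version, assuming frames are grids of (R, G, B) pixels (the function name and offset are just for the sketch): delay the duplicate stream by a frame or two, then composite red from the "left eye" with green/blue from the "right":

```python
# Temporal-offset anaglyph: fake a "right eye" by delaying the same
# footage one or two frames (only plausible on steady horizontal
# moves), then take R from the left eye and G/B from the right.
def temporal_anaglyph(frames, offset=1):
    out = []
    for i in range(len(frames) - offset):
        left, right = frames[i], frames[i + offset]
        out.append([
            [(lp[0], rp[1], rp[2])  # R from left, G/B from right
             for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)
        ])
    return out
```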

I am well aware that this is far from real 3D, and it doesn't work with vertical camera moves, but if well performed, the result is quite "...".

Notice(s):
- If your camera move is shaky, this method will crush your eyeballs, and so will pushing it more than 3 frames. (Looking at a freeze frame with anaglyph glasses instead, it'll gain more depth the further you push your uppermost layer.)

Now I am just waiting for a comment like: "Nah... did that years ago."
 
Sebastian, I can tell you and everyone else why my theory on the system holds water.

The sensor in the RED Epic XS35 is capable of 5K, that is 5120x2700. Part of that sensor would be needed to capture two 4K images.

This is the example of the XS35 sensor on the RED website:
http://epic.red.com/epic.html

This is my example of that same sensor capturing 2 4K images for 3D:
redepicxs353dexample2.jpg

As you can see, the XS35 would capture more than what would be required for a true S3D image, and you don't even need a fancy lens; if it fits on the camera, then it's usable, though if this system were put into use, some people would make special lenses to help achieve a higher level of depth. This is just an example of someone shooting at 1.78:1 at 4K resolution in 3D. As you can see, the sensor doesn't add more horizontal lines; those extra horizontal lines are already there. The reason they are red and blue was to use anaglyph colors to demo this for you. The full range of the sensor makes the images too far apart for a pair of eyes to line them up with 3D glasses, so I just went ahead and stretched them as far as they would go anyway. It's actually a full-size 5K image made in Microsoft Digital Imaging Standard 2006; the 4K images are also full size. The full-size 5K .jpg (link only):
http://img526.imageshack.us/img526/946/redepicxs353dexample3.jpg
 