3D intentions using RED Cameras & Need some advice ...

PatrickWebb

Well-known member
Joined
Dec 5, 2020
Messages
389
Reaction score
11
Points
18
Location
Trondheim, Norway.
G'day all,

I'm looking into some 3D design to make a virtual gallery and concert hall for a client. I'll have some help from a tech crew; however, I wanted to pick some minds here first before getting into the initial development.

As far as the concept goes, I will be filming and taking photos of an establishment that has several rooms. These rooms will be utilised as virtual gallery spaces, essentially to sell work online. My contact at a local university has a 3D "scanning" machine (here is where it gets a little blurry for me) that scans a room and then creates a 3D model of the space. Now, I am not 100% sure that it does a great job, especially at rendering fine detail. This is where my idea comes in: use an EPIC and its internal XYZ gyro to help drive the virtual space, film that space in 5K, 6K, 8K, whatever, to capture those details, and then transfer them onto that digital/virtual model.

Now, is that something that is possible here? Or is it more rigid than I like to imagine?

If so, would I then need to take the camera into the place once the virtual space has already been scanned and scaled, somehow register the camera within that virtual world, translate the XYZ gyro data in relation to the 3D space, and then take hi-res stills from the camera and map them into that world?

Gosh, I feel like it is getting complicated.

Essentially, I want to get as close to the real thing as possible, representing the space(s) as true to life as I can. Afterwards, the artworks, or items for sale, would be created in 3D by capturing the image first and then mapping that image onto the 3D framework designed for, let's say, a painting ~ essentially a 2D piece. However, my client wants even more detail: right down to the relief/texture of the paint strokes on an art piece.

It is ambitious, I know; however, I feel it is doable to this degree of detail.

Then, when the 3D world has been generated / created, I can get my DoP nerd on and light the damn place up with the 3D designers.

Let me know if I should clarify anything here! It is pretty in-depth. I have been organising this for a while and I want to work the camera successfully into the mission!

Cheers in advance for taking the time to consider helping out here! :driving:
 
You can do photogrammetry with your Epic, but generally for the style of this work we tap into LiDAR systems for environmental scanning. We do this somewhat often for feature films where we need to perform VFX set extensions or interactive environmental effects. Very useful for tracking and lighting purposes as well.
 

Hi Phil,
Thanks for the reply. I thought so. I have no idea where to start; I guess reading up on "photogrammetry" is now my next mission.

I will check into LiDAR, too.

Cheers!
 
RED Epic cameras are much more compact than RED One cameras, and they also have a modular design. Give it a try.
 
Edit: Sorry, just realized I resurrected a year-old thread. How did the project turn out?


Are you wanting this to be like a guided tour or an open space where you're free to move around?

I gather from the reference to details of the brush strokes that this would ideally be a program/app your client's customers can control to inspect the art pieces from various angles, i.e. with a VR headset?

Meshroom is a free photogrammetry program that I used to create a 3D model, which was then imported into Unreal Engine (also free until a product earns over $1 million in revenue) to build a virtual tour of an event space for a charity.

You could use motion capture; however, stills will make the process less convoluted. If you've got access to the 3D scanner (hopefully LiDAR-based), that will provide you with a more accurate depth map, and you can then use the photogrammetry model purely to wrap textures over that model.

The basic process is to shoot still images from various angles all over the space (video leads to an overabundance of frames for the program to calculate) at the same exposure (high f-stop, low ISO; motion blur and bokeh are bad for photogrammetry). Take care to cover windows and, if possible, any other highly reflective surfaces -- reflections and transparency confuse the algorithm, as do really light or really dark objects.

Import these pictures into the program and set it calculating. Walk away for several minutes to several days, depending on the number of cloud points (camera angles) and the strength of your computer system. Skipping tons of steps (optimizing, reducing vertices, etc.), once your model is complete, import it into Unreal Engine. From there, you can select from many available templates to give you a moveable character. Unreal Engine makes it easy to port it for use not only on PC, but also on gaming consoles, smartphones, and VR headsets.
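
If you want a quick sanity check that all your stills really were shot at matching settings before you hand them to the photogrammetry software, a rough Python sketch like the one below will flag any outliers. It assumes the Pillow library is installed and that your camera writes standard EXIF; the folder path is just a placeholder.

```python
# Rough pre-flight check: warn about stills whose exposure settings differ
# from the first image in the folder. Assumes the Pillow library is installed
# ("pip install Pillow") and that the photos carry EXIF data; the folder
# path below is a placeholder.
from pathlib import Path
from PIL import Image

PHOTO_DIR = Path("gallery_stills")   # wherever your stills live

EXIF_SUB_IFD = 0x8769                # pointer to the EXIF sub-block
TAGS = {0x829A: "ExposureTime", 0x829D: "FNumber", 0x8827: "ISO"}

reference = None
for photo in sorted(PHOTO_DIR.glob("*.jpg")):
    exif = Image.open(photo).getexif().get_ifd(EXIF_SUB_IFD)
    settings = {name: exif.get(tag) for tag, name in TAGS.items()}
    if reference is None:
        reference = settings
        print(f"Reference exposure from {photo.name}: {settings}")
    elif settings != reference:
        print(f"WARNING: {photo.name} differs from reference: {settings}")
```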

This is less than a "Cliff's Notes" with much information missing, just a basic outline. There are programs other than the ones I've mentioned that can get you to nearly the same place. I just mentioned these as they only require investing time in learning the software.

If you really want to use video clips in the final product, it would probably have to be more like a guided tour. You could just grab a 360° camera, take pictures at the "stepping stones", then record video moving between those landmarks. That would give you something similar to realty virtual tours, minus the nausea-inducing moves between viewpoints.
Maybe download Meshroom and grab some photos of a small room in your house so you can experiment a bit and see what kind of angles you'll need. Too many and it'll take forever to calculate and you increase the possibility of defects; too few and you'll see missing spots in the model (in Meshroom you can go back afterwards and add additional cloud points if you missed an area).
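
If you'd rather run those test photos through Meshroom without babysitting the GUI, newer builds also ship a command-line runner you can drive from a short script. This is only a rough sketch: the executable and flag names have changed between Meshroom releases (older versions called it meshroom_photogrammetry), so check --help on your install before trusting it.

```python
# Minimal sketch of driving Meshroom's headless pipeline from Python.
# Assumes a Meshroom release whose CLI runner is called "meshroom_batch"
# and accepts --input/--output; verify the exact names with --help first.
import subprocess
from pathlib import Path

stills = Path("test_room_photos")    # folder of stills from your test room
out_dir = Path("test_room_model")    # Meshroom will drop the textured mesh here
out_dir.mkdir(exist_ok=True)

subprocess.run(
    ["meshroom_batch", "--input", str(stills), "--output", str(out_dir)],
    check=True,                      # raise if the pipeline fails
)
print(f"Done; look for the mesh and texture files under {out_dir}")
```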

As for updating the artwork in the gallery without having to roll out a new version of the app, I wish you luck. I'm sure there's a way to have the app pull additional models from a server somewhere, but that's beyond my experience.
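
For what it's worth, one common pattern (sketched here in generic Python rather than anything Unreal-specific, with a made-up URL and manifest layout, since hooking downloaded assets into a running UE app is its own can of worms) is to have the app fetch a small manifest from a server on startup and download any gallery models it doesn't already have:

```python
# Hand-wavy sketch of the "pull new artwork from a server" idea: the app asks
# a server for a JSON manifest of model files and downloads anything missing
# from its local cache. The URL and manifest layout are made up for
# illustration; only Python's standard library is used.
import json
import urllib.request
from pathlib import Path

MANIFEST_URL = "https://example.com/gallery/manifest.json"   # placeholder
CACHE = Path("model_cache")
CACHE.mkdir(exist_ok=True)

with urllib.request.urlopen(MANIFEST_URL) as resp:
    manifest = json.load(resp)   # e.g. [{"name": "piece_01.glb", "url": "..."}]

for item in manifest:
    target = CACHE / item["name"]
    if not target.exists():      # only fetch artworks we don't have yet
        print(f"Downloading {item['name']} ...")
        urllib.request.urlretrieve(item["url"], target)
```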

Hopefully this helps a bit.


Michael
 
Hi Michael,
Thanks for the response.
Yeah, I ended up using another system, as using the EPIC would have been an extremely involved process for this job. I cannot remember the name of the device, but we had a member of the university bring in their 3D scanner, and that made a good mock-up of the room and a decent representation of the artworks in the gallery space. I did photograph all the artworks in a light-controlled tent with the Epic, however, and that turned out great. Otherwise, the dedicated machine worked like a charm for this project.
It was a bit of a messy project with a lot of money thrown around in strange places – but the result is there and I think that the client is more than happy. I left the project about 6 months ago and couldn't be happier.

Cheers.
Patrick
 
As for 3D, I have a Creality printer and can create some parts and details for greater clarity. But I need a program for 3D design, and I'll probably also need some video tutorials on how to use it; if someone can advise here, that would be great.
 