There are many ways to configure these systems, which will appeal to AV integrators. Let’s look at some of the variables:
The volume and lighting
First, you need to set the scene with a large backdrop that will display the virtual environment. This is called “the volume.” The size, shape and media of this volume will vary according to budget, space, and most importantly, your customer’s goals. What are they trying to achieve? And what type of media will help tell the story that delivers that result?
If they’re looking for atmospheric imagery that will always be in the deep background of a camera shot or a live stage setup, projection might work. But if they want flexibility in how they’re shooting, or they want to add a lot of interactivity to their content, LED might be the way to go.
With LED, budget and the scale of the stage will determine pixel pitch. Remember, there is a big difference between how a camera sees LEDs and how the human eye does. The camera will pick up on nuances and color problems that we can’t see. One of the biggest issues here is the moiré effect, where the tight grid of LED pixels interferes with the camera sensor’s own pixel grid and creates wavy lines in your video image. You’ve seen the same thing when a camera points at someone wearing a checked pattern: the video image looks distorted because the camera’s sensor can’t cleanly sample the tiny squares on the fabric.
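To get a feel for when moiré is likely, it helps to compare the LED grid as the camera images it against the sensor’s own pixel spacing. Here’s a rough sketch of that check; the pitch, lens, photosite and risk-band numbers are all illustrative assumptions, and real-world factors like lens blur and optical low-pass filtering will move the answer.

```python
# Back-of-napkin moiré check: compare the LED pixel grid as imaged on the
# camera sensor against the sensor's own sampling pitch. All numbers are
# illustrative assumptions, not vendor specs.

def imaged_pitch_um(pitch_mm: float, focal_mm: float, distance_m: float) -> float:
    """One LED pixel's size on the sensor, using thin-lens magnification ~ f/d."""
    return (pitch_mm * 1000.0) * focal_mm / (distance_m * 1000.0)

def moire_risk(pitch_mm: float, focal_mm: float,
               distance_m: float, photosite_um: float) -> bool:
    p = imaged_pitch_um(pitch_mm, focal_mm, distance_m)
    # Crude risk band: roughly one to three photosites per imaged LED pixel,
    # i.e. the grid lands near the sensor's sampling frequency.
    return photosite_um <= p <= 3.0 * photosite_um

# Example: 2.6 mm pitch wall, 35 mm lens, ~6 µm photosites
for d in (3.0, 6.0, 12.0, 24.0):
    print(f"{d:>4} m: {'risk' if moire_risk(2.6, 35.0, d, 6.0) else 'ok'}")
```

The takeaway: moiré tends to appear in a middle band of distances, which is why simply dollying the camera in or out can sometimes make it vanish.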
There are things you can do to avoid moiré, but check with your camera and LED screen manufacturers first to find out if your setup will be compatible. Those conversations will be helpful too, because soon there might be more options: many LED display manufacturers are developing products that will work better with cameras. Also, make sure your video wall processor is powerful enough to keep latency low, so you can avoid sync problems between the content on the screen and the movements of the camera. LEDs have the added benefit of providing some lighting support, and ceiling and floor displays can help create realistic reflections on physical set pieces and presenters.
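On the latency point, it can help to budget the pipeline in camera frames rather than raw milliseconds. A minimal sketch, with stage timings assumed for illustration (the real figures come from your processor and tile datasheets):

```python
# Rough latency budget, counted in camera frames. Stage timings below are
# assumed placeholders, not measured values.
CAMERA_FPS = 50.0
stages_ms = {
    "media server render": 20.0,
    "video wall processor": 10.0,
    "LED receiver card / tile": 5.0,
}
total_ms = sum(stages_ms.values())
frames = total_ms * CAMERA_FPS / 1000.0
print(f"{total_ms:.0f} ms end to end = {frames:.1f} camera frames of lag")
```

Once tracking data is driving the background, more than a couple of frames of lag shows up as scenery that “swims” behind camera moves.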
Cameras and sensors
You’ll need to calibrate color reproduction between the wall and camera, whether you’re using a manually operated camera or a production-quality PTZ. Cameras are a whole other topic, because the model you choose raises compatibility issues with certain LED screens. If the refresh rate of the camera is different from the screen’s, you can end up with visual artifacts or dropped frames.
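One way to sanity-check that pairing is to ask whether the camera’s exposure window covers a whole number of the LED wall’s refresh cycles; if it doesn’t, each frame integrates a different slice of the wall’s PWM cycle and you see rolling bands. A quick illustrative calculation, with assumed rates:

```python
# Does the camera's exposure cover a whole number of LED refresh cycles?
# Rates below are assumptions for illustration; a 3840 Hz refresh and a
# 180-degree shutter are common but not universal.

def cycles_per_exposure(fps: float, shutter_deg: float, led_refresh_hz: float) -> float:
    exposure_s = (shutter_deg / 360.0) / fps
    return exposure_s * led_refresh_hz

for fps in (24.0, 23.976):
    c = cycles_per_exposure(fps, 180.0, 3840.0)
    verdict = "ok" if abs(c - round(c)) < 0.02 else "banding risk"
    print(f"{fps} fps at 180 degrees: {c:.2f} cycles, {verdict}")
```

In practice, genlocking the camera and the wall processor to a common reference is the usual cure.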
Cameras are tracked using infrared markers or sensors, which follow the shot so the system can shift the virtual background to match the position and movement of the capture device. Tracking technologies are continuing to improve, making it easier to present realistic and dynamic content.
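The underlying idea is simple parallax: as the camera translates, distant scenery should slide differently than near scenery, so the flat wall reads as a deep space. Real systems render a full off-axis frustum from the tracked camera pose; this toy sketch only shows the similar-triangles relationship behind it, with made-up distances:

```python
# Toy parallax: how far a scenery layer "behind" the wall must slide across
# the wall surface, in the same direction as the camera move, to stay locked
# in world space. A real volume renders a full off-axis frustum instead.

def background_shift_m(cam_shift_m: float, wall_depth_m: float,
                       scenery_depth_m: float) -> float:
    return cam_shift_m * (scenery_depth_m - wall_depth_m) / scenery_depth_m

# Camera dollies 0.5 m sideways; the wall is 4 m away
for depth in (5.0, 20.0, 100.0):
    print(f"scenery at {depth:>5} m: slide {background_shift_m(0.5, 4.0, depth):.2f} m")
```

Note how far-off scenery slides almost fully with the camera while near scenery barely moves; that differential is what sells the depth.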
Media servers
Media servers enable real-time content changes, which enhances interactivity and lets operators adjust things on the fly. The whole content pipeline speeds up, so a show feels more like a live performance than standard media playback.
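As a concrete (and hypothetical) example of what a real-time content change looks like from the control side, many media servers accept OSC messages. The IP address, port and address paths below are placeholders, so check your server’s OSC documentation; the sketch uses the python-osc package:

```python
# Minimal sketch of a live content change over OSC, a protocol many media
# servers accept. The IP, port and address paths are hypothetical; consult
# your server's OSC map. Requires the python-osc package.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.50", 7000)   # media server (assumed address)
client.send_message("/layer/1/clip", 3)          # cue a different clip live
client.send_message("/layer/1/opacity", 0.8)     # trim a layer mid-show
```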
When it comes to choosing between 2D and 3D, keep in mind that the answer isn’t always to do the most technically intense 3D workflow for the ultimate cool factor. Sometimes a 2D background is more than enough. Several video production systems on the market will readily handle 2D content, including virtual backgrounds and on-screen graphics.
If you want to go into 3D, then a game engine needs to be added to the workflow. Game engines are used to create backgrounds and 3D environments that can be navigated just the way a player would move through a game’s world. The background can move in any direction, at any speed, allowing the presenter to “move” through a space without physically stepping beyond the volume’s edges.
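Stripped of any particular engine’s API, the trick is just a virtual camera that translates through the scene a little each frame. A minimal, engine-agnostic sketch, with all names assumed:

```python
# "Walk without walking": each frame the virtual camera translates through
# the 3D environment, so the world glides past a presenter who never leaves
# the volume. Class and function names are placeholders, not an engine API.
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    x: float = 0.0   # metres, lateral
    z: float = 0.0   # metres, forward

def tick(cam: VirtualCamera, vx: float, vz: float, dt: float) -> None:
    # Moving the camera forward reads, on the wall, as the scenery moving
    # backward: the audience sees the presenter "walking".
    cam.x += vx * dt
    cam.z += vz * dt

cam = VirtualCamera()
for _ in range(60):                        # one second at 60 fps
    tick(cam, vx=0.0, vz=1.4, dt=1 / 60)   # a natural ~1.4 m/s stroll
print(cam)                                 # roughly VirtualCamera(x=0.0, z=1.4)
```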
The size of the volume will vary. For film and television production, virtual production sets are gigantic, but even a small space can work, thanks to the miracle of set extension. That’s when you use a media server to extend the virtual background in real time beyond the LED walls. So, if your presenter is standing in front of a 12’ wide x 8’ tall LED screen, you can make it look like they’re standing in a volume that’s twice that size, or more.
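The bookkeeping behind that trick is straightforward: render a canvas larger than the wall, send only the wall-sized crop to the LEDs, and composite the rest around the camera feed downstream. A sketch using the 12’ x 8’ example, with an assumed pitch-derived pixel density:

```python
# Set-extension bookkeeping for the article's 12' x 8' wall. The renderer
# draws a canvas twice the wall's size; only the centre crop feeds the LEDs.
# The pixels-per-foot figure is an assumption derived from a ~3.2 mm pitch.
WALL_FT = (12, 8)
CANVAS_FT = (24, 16)       # the larger "volume" the audience will perceive
PX_PER_FT = 96             # 304.8 mm per foot / 3.175 mm pitch = 96 px

canvas_px = (CANVAS_FT[0] * PX_PER_FT, CANVAS_FT[1] * PX_PER_FT)
wall_px = (WALL_FT[0] * PX_PER_FT, WALL_FT[1] * PX_PER_FT)
x0 = (canvas_px[0] - wall_px[0]) // 2    # centre the physical wall in canvas
y0 = (canvas_px[1] - wall_px[1]) // 2
print(f"render {canvas_px} px, send crop {wall_px} px at ({x0}, {y0}) to the wall")
```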
Remember, the real-life image captured in-camera is composited in real time with the virtual scene for live streaming, broadcast or recording. So green-screen rules still apply: people in the room won’t see the extended set, and if someone steps beyond the volume’s edges, they will disappear from the video output. But wherever the audience sees the stitched-together image, whether on an IMAG screen elsewhere in the venue or on any other video platform, they’ll get the full experience.
Keep in mind, media servers are becoming more powerful all the time, so real-time content changes are much faster now. And those media server companies are also working diligently to make it easier to use these advanced systems. They’re creating accessible interfaces that merge screens, mapping and compositing together for ease of use.
Control
Once the content assets are loaded into a content management system, they can be modified for any presentation: swapping out logos, rearranging pieces, and telling the story in a narrative that suits a meeting, classroom exercise or message. For smaller, streamlined setups where there’s only one presenter with a single microphone, control of the full audio and video system can be handled via presets on an iPad or a simple push-button video production interface.
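A preset in that kind of setup is really just a named bundle of device states recalled by one button. A minimal sketch, with all device names and values as placeholders:

```python
# One-touch preset recall for a single-presenter room. Device names and
# values are placeholders, and the print() calls stand in for whatever
# driver calls (OSC, TCP, serial, ...) the real control system would make.
PRESETS = {
    "walk-in": {"mic_gain_db": -60, "wall_cue": "sponsor_loop", "camera": "wide"},
    "keynote": {"mic_gain_db": 0,   "wall_cue": "keynote_bg",   "camera": "podium"},
    "q_and_a": {"mic_gain_db": 0,   "wall_cue": "branding",     "camera": "audience"},
}

def recall(name: str) -> None:
    for device, value in PRESETS[name].items():
        print(f"set {device} -> {value}")   # stand-in for a driver call

recall("keynote")   # one button press on the iPad panel
```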