How to best view the footage?

You can view 360/VR footage on any desktop or smartphone. Here are a few tips on getting the best experience on whichever device you’re using.

Desktop

If you’re watching via YouTube on a desktop or laptop, make sure you have the quality settings turned up by clicking the little ‘cog’ in the bottom right corner and opening ‘Quality’. Most broadband connections can handle the 1440s quality. Some can handle 2160s, which is almost cinema quality. To look around, just click and drag on the picture.

The picture above is just a window onto YouTube; click the title of the piece on the window to open it directly on the YouTube site.

Smartphone

Open the piece in the YouTube app on your smartphone, or go to the Prime Time Facebook page in the Facebook app and tap on the video. The phone acts as a window onto the shot: if you move the phone you can look around the location. You can also swipe the screen with your finger to move the picture.

VR headset

If you have a VR headset you probably already know how to do this, but here goes: open the piece in your preferred viewing app on your phone (YouTube or Facebook), tap the little ‘goggles’ button in the lower right-hand corner, slide your phone into the headset and put in your headphones.

VR/360 fans say this is the best, most immersive experience, but some people say it leaves them with an odd ‘sea sickness’ afterwards. You should give it a try if you get a chance.

The VR headsets are simple plastic viewers. Basic models sell for about €30 and contain two plastic lenses which let you place your phone in front of your eyes and focus on the picture. If you’re viewing in a headset and the image looks duplicated, you haven’t tapped the goggles button.

Any problems, give me a shout on Twitter.

 

The Making of ‘Last Days of the Flats’?

After I showed several colleagues a draft edit of ‘The Last Days of the Flats’, their first question was nearly always some version of “how is it shot?”, “what does the camera look like?” or “how is it made?”. If you’re wondering the same yourself, this blog post should explain some of the background.

The answer to the first question is quite simple. You put the tripod down and make sure the recording lights are on, then you leave it. If you’re recording an interview you record the sound separately because the microphones on the GoPro cameras aren’t great. If you’re recording a non-interview shot, you go and hide somewhere the camera can’t see you, because it sees everything in the area. Then after a minute or so you go back to the tripod and stop the recording. The shoot is the simple part really.

The answer to the second question, ‘what does the camera look like?’, is easiest to show. Here’s a picture.

The ‘camera’ consists of a hard-plastic casing which houses six individual cube-shaped GoPro Hero4 Session cameras. Each of the six cameras records a section of the surrounding area; when the pictures are later compiled – using software I’ll get to later – they give a full 360-degree view of the location. It’s sort of like having a load of cameras looking out from the inside of a football: each camera covers one panel of leather, and when all the leather panels are stitched together they make the whole picture.
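To push the football analogy a little further: in the final ‘unwrapped’ picture (the 4096 x 2048 frames I mention further down), every pixel corresponds to a direction looking out from the centre of the ball. Here’s a rough Python sketch of that mapping, purely for illustration and not part of our actual workflow:

```python
# Size of the final 'unwrapped' (equirectangular) frame, as described later in this post.
FRAME_WIDTH, FRAME_HEIGHT = 4096, 2048

def pixel_to_direction(x, y):
    """Map a pixel in the flat 360 frame to a viewing direction from the centre of the 'ball'.

    Returns (yaw, pitch) in degrees: yaw is the compass direction (0-360),
    pitch is up/down (+90 at the ceiling, -90 at the floor).
    """
    yaw = (x / FRAME_WIDTH) * 360.0            # left-right position becomes a compass heading
    pitch = 90.0 - (y / FRAME_HEIGHT) * 180.0  # top of frame is straight up, bottom is straight down
    return yaw, pitch

print(pixel_to_direction(FRAME_WIDTH // 2, FRAME_HEIGHT // 2))  # (180.0, 0.0) -- the middle of the picture
print(pixel_to_direction(0, 0))                                 # (0.0, 90.0) -- straight up
```

Each GoPro ends up owning a patch of that yaw/pitch space, which is the ‘panel of leather’ in the analogy.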

The answer to “how is it made?” requires a bit more explaining.

The problem for documentary-making with this camera set-up begins with the six cameras being completely independent of each other. There’s no single ‘record’ button, so you have to make sure all six are in record mode first, and with the same settings each time. If one camera isn’t recording you end up with a big black square in the compiled shot (i.e. a hole in the ball), leaving the shot or – worse still – the interview completely useless.
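Just to illustrate the kind of sanity check this pushes you towards (this is not our actual production tooling, and the folder layout is made up for the example), you end up wanting something that confirms every shot has a file from all six cameras before you go any further:

```python
from pathlib import Path

CAMERAS = ["cam1", "cam2", "cam3", "cam4", "cam5", "cam6"]

def check_shot(shot_folder):
    """Warn if any of the six cameras is missing a recording for this shot."""
    shot = Path(shot_folder)
    missing = [cam for cam in CAMERAS if not list(shot.glob(f"{cam}*.mp4"))]
    if missing:
        print(f"{shot.name}: no footage from {', '.join(missing)} -- there will be a hole in the ball")
    else:
        print(f"{shot.name}: all six cameras present")

# Hypothetical usage:
# check_shot("shoot_day_1/interview_01")
```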

Once you’ve filmed your interviews and shots you’ve a second, somewhat related issue to deal with. All the cameras have started and stopped recording at different points in time (maybe a few seconds apart, but even a tiny amount is enough to cause problems). Because the recordings begin at different points in time, you have to bring the footage from each camera into editing software and sync them all up, then export a new file for each camera and each shot. We used Adobe Premiere CC for this, which is consumer-level software with a really good auto-synchronisation function that does a lot of the hard work for you. Exporting each file takes about three times real time (it takes three hours to export 60 minutes of footage), and with six cameras that’s six exports per shot, so there were a lot of files.
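To give a sense of what that three-times-real-time figure means in practice, here’s the back-of-the-envelope arithmetic (the two hours of footage in the second example is made up; the rest comes from the numbers above):

```python
EXPORT_FACTOR = 3   # roughly three hours of exporting per hour of footage
CAMERAS = 6         # one synced file per camera, per shot

def export_hours(total_footage_minutes):
    """Total export time, in hours, for all six camera files covering the given amount of footage."""
    return (total_footage_minutes / 60) * EXPORT_FACTOR * CAMERAS

print(export_hours(60))    # 18.0 hours of exporting for one hour of material
print(export_hours(120))   # 36.0 hours for two hours of material
```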

Once you’ve done that processing, all six files for each shot will cover the same time period and can be stitched together.

The stitching of the footage is an art in itself. RTE’s Stuart Masterson, who advised us on much of this production and is a massive well of self-taught knowledge on all things 360/VR, did the tough work on that. He used a piece of software called Kolor.

A certain amount of the picture from each camera overlaps with the picture from the neighbouring cameras. A major problem lies in the fact that the cameras will all have automatically adjusted differently for the different light levels in the area each one is covering. So in one ‘compiled’ shot there would be sections (i.e. panels in the ball) that appear really bright and others in the same shot that appear really dark. Stuart had to adjust the brightness levels across each shot so the compiled picture is evenly coloured. It starts off looking something like this…

The aim (aim!) is to smooth the light levels out as well as possible so there are no hard lines between the images from each camera. That’s just part of the process.
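As a rough illustration of the general idea (this isn’t what Kolor does internally, just a sketch of the principle): you can compare the brightness of the strip where two neighbouring cameras overlap and scale one picture so the two strips roughly agree.

```python
import numpy as np

def match_brightness(left_img, right_img, overlap_px=200):
    """Scale right_img so its overlap with left_img has roughly the same average brightness.

    Both images are numpy arrays (height x width x 3), with the right edge of left_img
    overlapping the left edge of right_img by `overlap_px` pixels.
    """
    left_strip = left_img[:, -overlap_px:].astype(float)
    right_strip = right_img[:, :overlap_px].astype(float)
    gain = left_strip.mean() / max(right_strip.mean(), 1e-6)   # avoid dividing by zero
    corrected = np.clip(right_img.astype(float) * gain, 0, 255)
    return corrected.astype(np.uint8)
```

The real job is far messier than one global gain per camera, but that’s the basic shape of it.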

This whole stitching process also involves choosing where to hide the stitch-lines. The overlap between neighbouring cameras’ shots is imperfect, and choosing where one ‘panel’ stops and another starts is key. It’s a really tricky process that Stuart has got his head around and become our expert on, something I haven’t quite figured out yet. Whenever I visited him his screen looked something like this… Your guess is as good as mine.
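Again as a sketch of the general idea rather than Stuart’s actual Kolor workflow: one common trick is to run the seam through the part of the overlap where the two cameras disagree least, so the join is as invisible as possible.

```python
import numpy as np

def best_seam_column(left_strip, right_strip):
    """Pick the column in the overlap region where the two cameras' pictures differ least.

    Both strips are numpy arrays (height x width x 3) covering the same overlap region.
    Returns the column index at which to switch from one camera's picture to the other's.
    """
    difference = np.abs(left_strip.astype(float) - right_strip.astype(float))
    per_column_error = difference.sum(axis=(0, 2))   # total disagreement in each column
    return int(per_column_error.argmin())
```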

Once Stuart had stitched all the panels together on each shot and evened out the light levels, he exported a new version of each shot to give back to me for editing. These new compiled shots are massive, consisting of what would normally be six shots in one. In terms of pixel counts, most TV is 1920 pixels wide by 1080 high, while the footage that comes back for editing this stuff is 4096 wide by 2048 high. They’re still flat images, and they look like this:
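To put those frame sizes in perspective, a quick line of arithmetic using the figures above:

```python
hd_frame = 1920 * 1080        # an ordinary TV frame: 2,073,600 pixels
stitched_frame = 4096 * 2048  # a compiled 360 frame: 8,388,608 pixels

print(stitched_frame / hd_frame)   # ~4.05 -- each 360 frame carries roughly four HD frames' worth of pixels
```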

So after all that we’re left with lots of really big flat video files. We edit those in the same way as we would any other flat video files, though at the start it was odd working with the warped pictures. The interview pictured below looked especially odd.

We used Adobe Premiere Pro on a Mac laptop for this; the solid-state drive in the laptop handled the very detailed images better than our older desktop computers.

After all that, we could finally start putting together the story itself.

My colleague Conor Wilson and I wrote up a basic story structure, chose the interview clips and drafted a script. We laid the shots down in the editing software to match that structure. After that we were left with a structured story, but the sound attached to the images was still from the on-board mics on the GoPro cameras. We then had to match the audio from our externally recorded sound kit to these clips (there’s a small sketch of the idea just below). After tonnes of syncing and tinkering, re-scripting and reordering, we were happy with the story edit. All that was left to do was pick some music.
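The audio-matching step boils down to finding the time offset at which the external recording and the camera’s own scratch audio line up. Premiere’s auto-sync did that for us; purely as an illustration of the underlying idea, here’s a rough cross-correlation sketch:

```python
import numpy as np

def find_offset(camera_audio, external_audio, sample_rate=48000):
    """Estimate how many seconds the external recording lags the camera's scratch audio.

    Both inputs are mono numpy arrays at the same sample rate. This brute-force
    cross-correlation is fine for short clips; real auto-sync tools are much cleverer.
    """
    correlation = np.correlate(external_audio, camera_audio, mode="full")
    lag_samples = correlation.argmax() - (len(camera_audio) - 1)
    return lag_samples / sample_rate
```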

We then exported a video file of the whole story; for the audio, we exported an audio compilation file to send to our Dubbing Mixer, Owen Tighe. The compilation file allowed Owen to open up one-hundred-plus audio files in the right order and in the correct time positions. He used Pro Tools to mix the sound and fix issues with some of the recordings, and sent us back a single file with all the sound clips mixed together.
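In spirit, that compilation file is just a list telling the mixer’s software which sound file goes where on the timeline. A made-up miniature of the idea (the real formats Premiere exports carry far more information than this):

```python
from dataclasses import dataclass

@dataclass
class AudioClip:
    source_file: str        # the externally recorded sound file
    timeline_start: float   # where the clip sits in the finished story, in seconds
    source_offset: float    # where inside the source file the used section begins
    duration: float         # length of the used section, in seconds

# A purely hypothetical fragment of such a list:
timeline = [
    AudioClip("interview_01.wav", timeline_start=12.5, source_offset=64.0, duration=22.0),
    AudioClip("atmos_courtyard.wav", timeline_start=34.5, source_offset=0.0, duration=15.0),
]
```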

After all that we’re left with one file for audio and one for pictures. We put the two together, thankfully they matched, and exported a final file with both video and audio in one. That file is then run through a very simple piece of free software called Spatial Media Metadata Injector, which adds some data to the file to tell YouTube and Facebook that their servers are receiving a 360/virtual reality video rather than a flat video file.
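The injector is a simple point-and-click tool, so there’s nothing you actually need to script, but if you wanted to run it over a batch of files, something like the sketch below would do it. I’m assuming the command-line form shown in the tool’s own documentation (‘spatialmedia -i input output’); check the flags against the version you download.

```python
import subprocess

def inject_360_metadata(flat_file, injected_file):
    """Run the Spatial Media Metadata Injector over an exported video file.

    Assumes the injector's documented command-line form; verify against your copy's README.
    """
    subprocess.run(["python", "spatialmedia", "-i", flat_file, injected_file], check=True)

# Hypothetical filenames:
# inject_360_metadata("final_edit.mp4", "final_edit_360.mp4")
```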

After it has been uploaded, the websites take a bit of time to process the file and wrap it into a ball that the viewer can look out at from inside, but once that’s done you can get your headset and get watching, or click-and-drag on your desktop. If you turn your YouTube quality settings up to or near the maximum, the footage will be at the same quality as, or higher than, most TV broadcasts.