Implementing a 360º Video Viewer with SpriteKit and SceneKit

A few days ago I published a post that showed how, with relatively little code, you could create a primitive 360º panoramic photo viewer in SceneKit. In this mini-post I am going to show how, by adding SpriteKit and a pinch of AVFoundation to the mix, that example can be extended to play 360º video.

Still shot from 360º video of balloon flight.

Getting Started

The starting point for this project is a slightly refactored version of the PanoView project. To keep things simple I created a new repository, 360Video, on GitHub where you can download the starter project.

Details of the code in the methods createSphereNode(material:), configureScene(node:) and startCameraTracking() can be found in the post SceneKit and CoreMotion in Swift.
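As a quick reminder of the general shape (and only as a sketch — the real implementations live in the starter project and the earlier post), createSphereNode(material:) builds a large sphere whose surface carries the supplied material, roughly like this:

// Illustrative sketch only; the starter project's version may differ in details.
func createSphereNode(material material: AnyObject?) -> SCNNode {
    let sphere = SCNSphere(radius: 20.0)
    sphere.firstMaterial?.diffuse.contents = material
    // Render both faces so the texture is visible from inside the sphere.
    sphere.firstMaterial?.doubleSided = true
    return SCNNode(geometry: sphere)
}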

Using Video as a Material in SceneKit

The process for using video as a material is a little convoluted. First you need a URL for a 360º video stream, which is not as easy as it sounds since most are siloed behind the YouTube and Facebook players. You then create an AVPlayer to play the video, wrap the player in an SKVideoNode, add that node to an SKScene, and finally use the SpriteKit scene as the material for your SceneKit geometry. Although this may sound a bit scary, the code is not actually too bad:

// let urlString = "http://all360media.com/wp-content/uploads/pano/laphil/media/video-ios.mp4"
let urlString = "http://kolor.com/360-videos-files/kolor-balloon-icare-full-hd.mp4"
guard let url = NSURL(string: urlString) else {
    fatalError("Failed to create URL")
}

let player = AVPlayer(URL: url)
let videoNode = SKVideoNode(AVPlayer: player)
let size = CGSizeMake(1024, 512)
videoNode.size = size
videoNode.position = CGPointMake(size.width / 2.0, size.height / 2.0)
let spriteScene = SKScene(size: size)
spriteScene.addChild(videoNode)

Then change the nil material in the call below:

let sphereNode = createSphereNode(material: spriteScene)

Finally, make sure to call play(sender:) on the SCNView so that the video starts playing:

override func viewDidAppear(animated: Bool) {
    super.viewDidAppear(animated)
    sceneView.play(self)
}

You should now be able to compile and run the project and enjoy either a balloon ride or sitting amongst the L.A. Philharmonic, depending on which URL you enable.

Conclusion

These few lines of code took a surprisingly long time to write, mainly because video texturing does not seem to work in playgrounds… and does Apple mention this in the documentation? Like, anywhere? If you guessed no for this one, you would be correct. So it was always unclear whether I was doing something wrong or whether it was a bug. Similarly, an additional hack is needed to get this code to work on OS X. I will update this post with radar numbers and the OS X hack at a later time.

Putting this frustration aside, it is pretty cool that you can do this with so few lines of code. I should confess that the code does take some liberties: it hardcodes the video size, there is some sleight of hand in the camera tracking to account for differences in the SpriteKit and SceneKit coordinate systems, and the tracking is prone to gimbal lock. In future posts I’ll show how to correct these problems.

The completed code can be found at the 360Video repository on GitHub.



Getting Started with ModelIO

The Model I/O framework was introduced at WWDC 2015; however, there is remarkably little sample code out there. Over the past few days I have been trying to write reliable code to import an .OBJ file and apply textures to it. In this article, in the hope that it will save others some pain, I’ll explain how I got it to work.

Getting Started

To get started, download the starter project from GitHub. This project contains a single-view app, some graphical resources and a 3D model. The model comes from the Free Star Wars Model Pack from Video Copilot.

Loading the OBJ File

Let’s get started by loading the OBJ file. An OBJ file can be loaded directly into an MDLAsset from a URL.

Open the file ViewController.swift and add the following code to viewDidLoad:

// Load the .OBJ file
guard let url = Bundle.main.url(forResource: "Fighter", withExtension: "obj") else {
    fatalError("Failed to find model file.")
}

let asset = MDLAsset(url:url)
guard let object = asset.object(at: 0) as? MDLMesh else {
    fatalError("Failed to get mesh from asset.")
}


This code determines the URL for the OBJ model file in the app’s bundle and loads it into an MDLAsset. It then extracts the mesh from the asset; this mesh will be used in the next section to create a SceneKit SCNNode that wraps the model.

Displaying the Scene

To check that the model has loaded correctly it would be nice to display it on the screen. This is a two-step process: (1) create a SceneKit scene containing the model and (2) display the scene in a view.

To create the scene add the following code after the code in the previous section:

// Wrap the ModelIO object in a SceneKit object
let node = SCNNode(mdlObject: object)
let scene = SCNScene()
scene.rootNode.addChildNode(node)


The above code wraps the Model I/O mesh in an SCNNode, creates an empty SCNScene, and then adds the model node to the scene’s root node.

To display the scene add the following code:

// Set up the SceneView
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.scene = scene
sceneView.backgroundColor = UIColor.black


The sceneView variable here is a computed property in the starter project that refers to an SCNView in the storyboard. The autoenablesDefaultLighting option is a convenient way to get some basic lighting going with minimal code, and allowsCameraControl lets you rotate, pan and zoom the model via touch gestures. Once these options are set, all that remains is to set the `scene` property to the scene from the previous code section and set the background to black (the default material is light grey, so it shows up better against black).
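If you are curious, a hypothetical version of that computed property might look like the following (the starter project may instead wire it up as an outlet):

// Assumes the view controller's root view in the storyboard is an SCNView.
var sceneView: SCNView {
    return view as! SCNView
}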

At this point you can run your project and if all is well you should see something like this (after a little rotating and zooming).

The space fighter model with no textures applied.

Texturing the Model

To texture a model in Model I/O, you create an MDLMaterial and add properties to it that describe the material.

To create the material, add this code just after extracting the MDLMesh from the asset:

// Create a material from the various textures
let scatteringFunction = MDLScatteringFunction()
let material = MDLMaterial(name: "baseMaterial", scatteringFunction: scatteringFunction)


This creates a material with a default MDLScatteringFunction object.

For each of our textures we need to determine the URL for the texture image, create an MDLMaterialProperty and then add it to the material. Repeating the same few lines of code for each texture is tedious, so instead I created an extension on MDLMaterial to simplify this.

Add the following code at file scope:

extension MDLMaterial {
    func setTextureProperties(_ textures: [MDLMaterialSemantic: String]) {
        for (key, value) in textures {
            guard let url = Bundle.main.url(forResource: value, withExtension: "") else {
                fatalError("Failed to find URL for resource \(value).")
            }
            let property = MDLMaterialProperty(name: value, semantic: key, url: url)
            self.setProperty(property)
        }
    }
}


This extension lets us pass a dictionary of MDLMaterialSemantic and String pairs; the former is an enum that specifies what the property is intended to be, for example .baseColor, .specular or .emission.

With this extension in place we can specify the three textures for this material compactly as follows:

material.setTextureProperties([
    .baseColor:"Fighter_Diffuse_25.jpg",
    .specular:"Fighter_Specular_25.jpg",
    .emission:"Fighter_Illumination_25.jpg"])

The final step of texturing is to apply the material to all the submeshes of the object:

// Apply the texture to every submesh of the asset
for submesh in object.submeshes {
    if let submesh = submesh as? MDLSubmesh {
        submesh.material = material
    }
}

You can compile and run your project. You should now see a much more impressive model.

The textured space fighter.

Conclusion

Model I/O does allow you to load models with relatively few lines of code, but some of the steps are unintuitive; for example, it took me quite some time to figure out how to attach the textures to the model. There also seem to be some bugs and poorly documented areas. Some additional sample code from Apple would do a lot to clear up these mysteries.

I hope the code I’ve presented here will help you avoid some of the pitfalls I ran into. The completed code can be found in the GettingStartedWithModelIO repository on GitHub.


ShinkansenSpeed: MapKit and CoreLocation in Swift

This article explains how to use CoreLocation and MapKit to display a location and speed on a map.

Last year I was in Japan and I was traveling on some seriously speedy trains. I was gutted (not happy) that the “bullet train” (Shinkansen, 新幹線) did not have a speedometer in the carriage that would let me know how fast I was going. But I had a laptop and a phone, so this is what I built…

Getting Started

To get started, download the template project from Project Template. This is a simple, single-view project with an MKMapView occupying the entire screen, overlaid with a UILabel to display the speed. These are connected to the mapView and speedLabel outlets respectively.
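The later snippets also assume the view controller conforms to CLLocationManagerDelegate and owns a location manager and a shared number formatter. As a rough sketch of what the template provides (the exact declarations in the template may differ):

import UIKit
import MapKit
import CoreLocation

class ViewController: UIViewController, CLLocationManagerDelegate {

    @IBOutlet weak var mapView: MKMapView!
    @IBOutlet weak var speedLabel: UILabel!

    // Supplies the location and speed updates.
    let locationManager = CLLocationManager()

    // Shared formatter used when displaying the speed.
    static let numberFormatter: NSNumberFormatter = {
        let formatter = NSNumberFormatter()
        formatter.maximumFractionDigits = 0
        return formatter
    }()
}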

May I Know Where I Am?

Location data is sensitive, so we must first ask permission to access it. Note that the permission prompt will only appear if the app’s Info.plist contains the NSLocationWhenInUseUsageDescription key. Add the following to `viewDidLoad` in `ViewController.swift`:

// Please sir, may I know where I am?
if CLLocationManager.authorizationStatus() != CLAuthorizationStatus.AuthorizedWhenInUse {
    self.locationManager.requestWhenInUseAuthorization()
}

Tell Me Where I Am!

There is no `activityType` that matches train travel, and I had a power outlet at my seat, so I greedily configured the location manager for the best accuracy. To minimize the drain on the user’s battery you should choose the minimum `desiredAccuracy` that is sufficient for your needs.

Add the following code to configure the location manager to report locations at the best accuracy:

// Configure location manager
locationManager.activityType = CLActivityType.OtherNavigation
locationManager.desiredAccuracy = kCLLocationAccuracyBest
locationManager.delegate = self
locationManager.startUpdatingLocation()

Show Me Where I Am

The location manager in the previous section tells us where we are in terms of longitude, latitude and speed, but to really visualize it we need a map.

Add the following code to map the current location:

// Configure map view
mapView.showsUserLocation = true
mapView.setUserTrackingMode(MKUserTrackingMode.Follow, animated: true)

Don’t Go to Sleep On Me

To keep the display from going to sleep, add the following:

// Stop the display from going to sleep
UIApplication.sharedApplication().idleTimerDisabled = true

How Fast Am I Going?

And finally… we come to the main part of the program: the CLLocationManagerDelegate method that updates the speed label whenever the location changes.

/**
 Called when the location changes; updates the speed shown in the speed label.
 */
func locationManager(manager: CLLocationManager, didUpdateToLocation newLocation: CLLocation, fromLocation oldLocation: CLLocation) {
    // CLLocation.speed is reported in metres per second; convert to km/h.
    if newLocation.speed > 0 {
        let kmh = newLocation.speed / 1000.0 * 60.0 * 60.0
        if let speed = ViewController.numberFormatter.stringFromNumber(NSNumber(double: kmh)) {
            self.speedLabel.text = "\(speed) km/h"
        }
    } else {
        self.speedLabel.text = "---"
    }
}

Conclusion

The completed code for the tutorial can be found in the Shinkansen GitHub repository.
