Stereoscopic 3D Flash Augmented Reality demos

This is an example of how you can use ActionScript3 with FLAR and Papervision3D to implement Flash-based stereoscopic augmented reality applications compatible with Vuzix Wrap 920 AR video see-through augmented reality glasses. Monoscopic examples, for webcams and monoscopic AR HMDs (like iWear VR920 + CamAR), are also provided. Such applications are cross-platform, can run in a web browser, and are easy to share over the Internet.




Stereoscopic AR demo viewed in Adobe Flash Player 10 with Vuzix Wrap 920 AR

The demo imports a Collada DAE model and renders it on a visual AR marker over the video feed, using one video stream (monoscopic) or two (stereoscopic). The stereoscopic version uses only the left-eye video stream for marker recognition and calculates the right-eye perspective for the virtual object; this improves the overall speed of the application. The same camera-offset calculation can be used for upcoming optical see-through AR glasses (such as the Vuzix STAR 1200).
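The right-eye calculation above boils down to one matrix multiplication: a translation by the camera baseline along X (the demo uses -51 mm for the Wrap 920 AR) applied to the left-eye marker transform. A minimal Python sketch of this idea (for illustration only; the demo itself does this with Papervision3D's Matrix3D, as shown in the source code below):

```python
# Sketch of the right-eye pose calculation used in the stereoscopic demo:
# the right-eye camera matrix is obtained by multiplying a translation
# matrix (the inter-camera offset along X, in millimeters) by the
# left-eye marker transform, so only one video stream needs marker
# detection.

def translation_matrix(dx, dy=0.0, dz=0.0):
    """4x4 homogeneous translation matrix as nested lists."""
    return [[1.0, 0.0, 0.0, dx],
            [0.0, 1.0, 0.0, dy],
            [0.0, 0.0, 1.0, dz],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(a, b):
    """Multiply two 4x4 matrices (a * b)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def right_eye_transform(left_eye, baseline_mm=51.0):
    """Derive the right-eye transform from the left-eye marker transform.
    baseline_mm is the horizontal camera offset; the demo uses -51 mm
    along X for the Vuzix Wrap 920 AR."""
    delta = translation_matrix(-baseline_mm)
    return mat_mul(delta, left_eye)

# Example: marker detected 500 mm straight ahead of the left camera
left = translation_matrix(0.0, 0.0, 500.0)
right = right_eye_transform(left)
print(right[0][3])  # X translation shifted by -51.0 mm
```

Because the offset matrix is constant, this costs one extra 4x4 multiply per frame instead of a second full marker detection pass.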

These demos use FLAR Toolkit (the regular version and the Alchemy branch) by Saqoosha and Ryo Iizuka, and Papervision3D by Carlos Ulloa Matesanz. We recommend the Alchemy branch of FLAR Toolkit, as it is the fastest variant. Read the comments in the source code for more implementation details.

To try the demo, you will need the latest Flash Player installed in your web browser. Then:

         1. Print out this marker (8x8 cm): ar_marker.pdf
         2.1. Run the stereoscopic demo for use with Vuzix Wrap 920 AR: FlarAlchemyVuzixWrapAR
         2.2. Run the monoscopic demo for use with a regular webcam: FlarAlchemyMonoscopic
         3. Click “Allow” when Flash Player prompts for camera access
         4. Look at the printed marker through the AR glasses, or show the marker to a webcam
         5. Click on the video to make it full screen
         6. Double-press the “Back” button on the Wrap AR control box to turn on side-by-side stereoscopic mode

You can freely use the provided source code of these examples to create your own applications based on FLAR Toolkit and Papervision3D.

Download full source code here (Adobe Flash Builder 4 workspace): FLAR_Examples.zip

The code of the most complete example is provided in FlarAlchemyVuzixWrapAR.as.
It is an example of augmented reality with side-by-side stereoscopic rendering, using FLAR (Alchemy branch):
//=========================================================================================================================
//Stereoscopic Augmented Reality demo for Vuzix Wrap 920 AR by Maxim Lysak, Viktor Kuropyatnik (VRM team http://3dvrm.com/)
//Based on FLAR Toolkit (alchemy branch) by Saqoosha, Ryo Iizuka and Papervision3D by Carlos Ulloa Matesanz
//
//Get latest FLAR Toolkit alchemy branch here (use SVN):
//http://www.libspark.org/svn/as3/FLARToolKit/branches/alchemy
//
//Get latest Papervision3D here (use SVN):
//http://papervision3d.googlecode.com/svn/trunk
//
//Add ".../src" folder from your Flar Alchemy directory,
//and ".../as3/trunk/src" from Papervision3D directory
//to your ActionScript source path.
//Do this by: Project properties -> ActionScript Build Path -> Source path -> Add Folder...
//
//You must also include FLARToolKit.swc in your project to compile.
//Do this by: Project properties -> ActionScript Build Path -> Library path -> Add SWC...
//=========================================================================================================================
package
{
    import flash.display.Bitmap;
    import flash.display.BitmapData;
    import flash.display.Scene;
    import flash.display.Sprite;
    import flash.display.StageDisplayState;
    import flash.events.Event;
    import flash.events.MouseEvent;
    import flash.media.Camera;
    import flash.media.Video;
    import flash.utils.ByteArray;

    import org.libspark.flartoolkit.core.FLARCode;
    import org.libspark.flartoolkit.core.FLARParam;
    import org.libspark.flartoolkit.core.FLARRgbRaster;
    import org.libspark.flartoolkit.core.FLARTransMatResult;
    import org.libspark.flartoolkit.detector.FLARSingleMarkerDetector;
    import org.libspark.flartoolkit.support.pv3d.FLARCamera3D;
    import org.libspark.flartoolkit.support.pv3d.FLARMarkerNode;

    import org.papervision3d.core.math.Matrix3D;
    import org.papervision3d.lights.PointLight3D;
    import org.papervision3d.materials.shadematerials.FlatShadeMaterial;
    import org.papervision3d.materials.utils.MaterialsList;
    import org.papervision3d.objects.parsers.DAE;
    import org.papervision3d.render.BasicRenderEngine;
    import org.papervision3d.scenes.Scene3D;
    import org.papervision3d.view.Viewport3D;

    [SWF(width="640", height="480", backgroundColor="#000000")]
    public class FlarAlchemyVuzixWrapAR extends Sprite
    {
        [Embed(source="ARMarker16x16.pat", mimeType="application/octet-stream")] //Marker pattern
        private var pattern:Class;
        [Embed(source="camera_para.dat", mimeType="application/octet-stream")] //Camera configuration file
        private var params:Class;

        private var fparams:FLARParam;
        private var mpattern:FLARCode;
        private var vid1:Video;
        private var vid2:Video;
        private var cam1:Camera;
        private var cam2:Camera;
        private var bmd1:BitmapData;
        private var bmd2:BitmapData;
        private var capture1:Bitmap;
        private var raster1:FLARRgbRaster;
        private var detector1:FLARSingleMarkerDetector;
        private var scene:Scene3D;
        private var camera:FLARCamera3D;
        private var container:FLARMarkerNode;
        private var vp1:Viewport3D;
        private var vp2:Viewport3D;
        private var bre:BasicRenderEngine;
        private var trans1:FLARTransMatResult;
        private var trans2:FLARTransMatResult;
        private var trans1p:Matrix3D;
        private var trans2p:Matrix3D;
        private var transdeltap:Matrix3D;
        protected var canvasWidth:int;
        protected var canvasHeight:int;
        protected var captureWidth:int;
        protected var captureHeight:int;
        protected var videoWidth:int;
        protected var videoHeight:int;
        private var skipframe:int;
        private var _threshold:int = 110; //Change threshold to make marker "visible" to camera (110 default)
        private var collada_model:DAE;
        private var ar:Array;

        public function FlarAlchemyVuzixWrapAR()
        {
            //Resolution of video for marker recognition
            this.captureWidth = 320;
            this.captureHeight = 240;
            //Resolution for video output
            this.videoWidth = 320;
            this.videoHeight = 240;
            //Resolution of canvas
            this.canvasWidth = 640;
            this.canvasHeight = 480;

            setupFLAR();
            setupCamera();
            setupBitmap();
            setupDAE();
            setupPV3D();

            //Init array to retrieve camera matrix from detector
            ar = new Array(0,0,0,0, 0,0,0,0, 0,0,0,0, 0,0,0,0);

            addEventListener(Event.ENTER_FRAME, loop);
            stage.addEventListener(MouseEvent.CLICK, _handleClick); //On click - go fullscreen
        }

        private function goFullScreen():void
        {
            if (stage.displayState == StageDisplayState.NORMAL) {
                stage.displayState = StageDisplayState.FULL_SCREEN;
            } else {
                stage.displayState = StageDisplayState.NORMAL;
            }
        }

        private function _handleClick(e:MouseEvent):void
        {
            goFullScreen();
        }

        private function setupDAE():void
        {
            //Load collada model
            collada_model = new DAE();
            collada_model.load("model/cow.dae");
            //And set up its initial parameters
            collada_model.scale = 40;
            collada_model.rotationX = 90;
        }

        private function setupFLAR():void
        {
            //Initialize FLAR Toolkit by loading camera parameters and the AR marker
            fparams = new FLARParam();
            fparams.loadARParamFile(new params() as ByteArray);
            mpattern = new FLARCode(16, 16);
            mpattern.loadARPattFromFile(new pattern());
        }

        private function setupCamera():void
        {
            //Initialize two video streams for stereoscopic vision
            vid1 = new Video(captureWidth, captureHeight);
            vid2 = new Video(captureWidth, captureHeight);
            //Init left camera
            cam1 = Camera.getCamera("0");
            cam1.setMode(captureWidth, captureHeight, 30, false);
            //Init right camera
            cam2 = Camera.getCamera("1");
            cam2.setMode(captureWidth, captureHeight, 30, false);
            vid1.attachCamera(cam1);
            vid2.attachCamera(cam2);
            //Re-init resolution here if you want to visualize the background video feed at a higher resolution
            //cam1.setMode(videoWidth, videoHeight, 30, false);
            //cam2.setMode(videoWidth, videoHeight, 30, false);
            //Set video-output viewport for left camera
            vid1.width = canvasWidth/2;
            vid1.height = canvasHeight;
            //Set video-output viewport for right camera
            vid2.x = canvasWidth/2;
            vid2.width = canvasWidth/2;
            vid2.height = canvasHeight;
            //Add them to rendering
            addChild(vid1);
            addChild(vid2);
        }

        private function setupBitmap():void
        {
            //Setup ARToolkit
            //Notice that we set up only the left-eye camera for marker detection
            //The right-eye perspective will be calculated; this is done for speed
            bmd1 = new BitmapData(captureWidth, captureHeight, false, 0);
            bmd1.draw(vid1);
            raster1 = new FLARRgbRaster(captureWidth, captureHeight);
            raster1.setBitmapData(bmd1);
            fparams.changeScreenSize(captureWidth, captureHeight);
            //Setup marker detector
            //Note - 80 is the real-world size of the marker in millimeters
            //This value is crucial for correct stereoscopic visualization
            detector1 = new FLARSingleMarkerDetector(fparams, mpattern, 80);
            //Setting continue mode to true corrects spontaneous jumps of virtual objects
            detector1.setContinueMode(true);
        }

        private function setupPV3D():void
        {
            //Setup Papervision3D
            scene = new Scene3D();
            camera = new FLARCamera3D(fparams);
            container = new FLARMarkerNode(1);
            scene.addChild(container);
            //--------------------------------------------------Draw your scene here
            container.addChild(collada_model); //Our previously loaded collada model
            //----------------------------------------------------------------------
            //Initialize render engine
            bre = new BasicRenderEngine();
            //And transformation matrices
            //We will need them later
            trans1 = new FLARTransMatResult();
            trans2 = new FLARTransMatResult();
            transdeltap = new Matrix3D();
            trans1p = new Matrix3D();
            trans2p = new Matrix3D();
            //Setup left 3D viewport
            vp1 = new Viewport3D(captureWidth, captureHeight);
            vp1.scaleX = (this.canvasWidth / this.captureWidth)/2;
            vp1.scaleY = this.canvasHeight / this.captureHeight;
            vp1.x = -4; // 4pix ???
            addChild(vp1);
            //Setup right 3D viewport
            vp2 = new Viewport3D(captureWidth, captureHeight);
            vp2.scaleX = (this.canvasWidth / this.captureWidth)/2;
            vp2.scaleY = this.canvasHeight / this.captureHeight;
            vp2.x = -4 + this.canvasWidth/2; // 4pix ???
            addChild(vp2);
        }

        private function loop(e:Event):void
        {
            //Refresh video bitmap and raster for recognition
            bmd1.draw(vid1);
            raster1.setBitmapData(bmd1);
            //Marker detection
            var detected:Boolean = false;
            try {
                //Detect marker
                detected = (detector1.detectMarkerLite(raster1, _threshold) && detector1.getConfidence() > 0.3);
            } catch(e:Error) {}
            if (detected) {
                container.visible = true;
                //Get camera transformation object from detector
                detector1.getTransformMatrix(trans1);
                //And put its data into the array
                trans1.getValue(ar);
                //Assign temporary values for readability in formulas
                var m00:Number = ar[0];  var m01:Number = ar[1];  var m02:Number = ar[2];  var m03:Number = ar[3];
                var m10:Number = ar[4];  var m11:Number = ar[5];  var m12:Number = ar[6];  var m13:Number = ar[7];
                var m20:Number = ar[8];  var m21:Number = ar[9];  var m22:Number = ar[10]; var m23:Number = ar[11];
                var m30:Number = ar[12]; var m31:Number = ar[13]; var m32:Number = ar[14]; var m33:Number = ar[15];
                //Obtain a Papervision3D-type transformation matrix from the left-eye matrix
                trans1p.n11 = m01;  trans1p.n12 = m00;  trans1p.n13 = m02;  trans1p.n14 = m03;
                trans1p.n21 = -m11; trans1p.n22 = -m10; trans1p.n23 = -m12; trans1p.n24 = -m13;
                trans1p.n31 = m21;  trans1p.n32 = m20;  trans1p.n33 = m22;  trans1p.n34 = m23;
                //Set up translation matrix to calculate the right-eye matrix
                transdeltap.n14 = -51; //X IPD (not 100% realistic, but the best setting for Wrap 920 AR)
                transdeltap.n24 = 0;   //Y
                transdeltap.n34 = 0;   //Z
                //Obtain the right-eye matrix by multiplying the delta matrix by the left-eye matrix
                trans2p.calculateMultiply(transdeltap, trans1p);
                //Transform the resulting matrix back to FLAR form
                m01 = trans2p.n11;  m00 = trans2p.n12;  m02 = trans2p.n13;  m03 = trans2p.n14;
                m11 = -trans2p.n21; m10 = -trans2p.n22; m12 = -trans2p.n23; m13 = -trans2p.n24;
                m21 = trans2p.n31;  m20 = trans2p.n32;  m22 = trans2p.n33;  m23 = trans2p.n34;
                //Put the values back into the array
                ar[0] = m00;  ar[1] = m01;  ar[2] = m02;  ar[3] = m03;
                ar[4] = m10;  ar[5] = m11;  ar[6] = m12;  ar[7] = m13;
                ar[8] = m20;  ar[9] = m21;  ar[10] = m22; ar[11] = m23;
                ar[12] = m30; ar[13] = m31; ar[14] = m32; ar[15] = m33;
                //And load the array back into the transformation object
                trans2.setValue(ar);
                //Set transformation camera matrix for the left eye
                container.setTransformMatrix(trans1);
                //Render the scene in the left viewport
                bre.renderScene(scene, camera, vp1);
                //Set transformation camera matrix for the right eye
                container.setTransformMatrix(trans2);
                //Render the scene in the right viewport
                bre.renderScene(scene, camera, vp2);
                skipframe = 0;
            } else {
                //If the marker isn't detected, count 30 frames and then stop showing the scene
                skipframe = skipframe + 1;
                container.setTransformMatrix(trans1);
                bre.renderScene(scene, camera, vp1);
                container.setTransformMatrix(trans2);
                bre.renderScene(scene, camera, vp2);
                if (skipframe > 30) {
                    //This prevents spontaneous flickering of the scene in bad lighting conditions
                    container.visible = false;
                    container.setTransformMatrix(trans1);
                    bre.renderScene(scene, camera, vp1);
                    container.setTransformMatrix(trans2);
                    bre.renderScene(scene, camera, vp2);
                }
            }
        }
    }
}


Useful Links:

Use this online marker generator from TaroTaro to generate *.pat files for your own markers:
http://flash.tarotaro.org/blog/2008/12/14/artoolkit-marker-generator-online-released/

Check these great tutorials to get into FLAR and Papervision3D:
http://www.mikkoh.com/blog/2008/12/flartoolkitflash-augmented-realitygetting-started/
http://adobe.edgeboss.net/download/adobe/adobetv/gotoandlearn/ar.mov

Get latest FLAR Toolkit here (use SVN):
http://www.libspark.org/svn/as3/FLARToolKit/trunk/

Get latest FLAR Toolkit alchemy branch here (use SVN):
http://www.libspark.org/svn/as3/FLARToolKit/branches/alchemy/

Get latest Papervision3D here (use SVN):
http://papervision3d.googlecode.com/svn/trunk




Developed by VRM team

Visit us at:
www.3DVRM.com

Contacts:
maxim.lysak@3dvrm.com
viktor.kuropyatnik@3dvrm.com

Design © Kero 2011