User Guide for Multiple PlayStation Eye Cameras Configuration
From iPiSoft Wiki
- Computer (desktop or laptop):
- CPU: x86 compatible (Intel Pentium 4 or higher, AMD Athlon or higher), dual- or quad- core is preferable
- Operating system: Windows 8, 7, XP SP3, Vista (x86 or x64)
- USB: at least two USB 2.0 or USB 3.0 controllers
- For more info see USB controllers
- ExpressCard slot (for laptops)
- Optional, but highly recommended. It allows you to install an external USB controller in case of compatibility issues between cameras and built-in USB controllers, or if all USB ports are in fact connected to a single USB controller
- Storage system: HDD or SSD or RAID with write speed:
- For 4 cameras at 60 fps, 640 x 480 resolution: not less than 70.4 MByte/sec
- For 6 cameras at 60 fps, 640 x 480 resolution: not less than 105.6 MByte/sec
- Note. If your write speed is lower, you can work with 320 x 240 resolution and/or a lower frame rate. Alternatively, you can use compressed mode (which requires a 3-5 times lower write speed, but CPU performance may become a bottleneck)
- 3 to 6 Sony PlayStation Eye for PS3 cameras.
- for more info see Cameras and accessories#Cameras
- 4 to 9 active USB 2.0 extension cables (depending on number of cameras and scene set-up)
- Optional: tripods to place cameras
- for more info see Cameras and accessories#Tripods
- Mini Maglite (or other flashlight) for calibration
- for more info see Cameras and accessories#Mini Maglite (or other flashlight) for calibration
- Minimum required space: 4m by 4m (13 by 13 feet)
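The storage write-speed figures above can be reproduced with a quick back-of-envelope calculation. This is a sketch; it assumes uncompressed raw Bayer video at 1 byte per pixel, and reports binary megabytes, which matches the numbers in the list:

```python
# Sustained disk write speed needed for N PS Eye cameras recording raw Bayer
# video (1 byte per pixel, uncompressed).
def required_write_speed_mib(cameras, width=640, height=480, fps=60):
    """Required write speed in MByte/sec (binary megabytes)."""
    return cameras * width * height * fps / (1024 * 1024)

print(round(required_write_speed_mib(4), 1))  # 70.3 -> the ~70.4 MByte/sec figure for 4 cameras
print(round(required_write_speed_mib(6), 1))  # 105.5 -> the ~105.6 MByte/sec figure for 6 cameras
```

Compressed mode divides these figures by roughly 3-5 at the cost of extra CPU load, which is why the note above warns that the CPU can become the bottleneck.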
iPi Mocap Studio
- Computer (desktop or laptop):
- CPU: x86 compatible (Intel Pentium 4 or higher, AMD Athlon or higher), dual- or quad- core is preferable
- Operating system: Windows 8, 7, XP SP3, Vista (x86 or x64)
- Video card: Direct3D 10-capable (Shader Model 4.0) gaming-class graphics card
- Intel integrated graphics is not supported
- for more info see Cameras and accessories#Video_Card
- Select needed components
- Read and accept the license agreement by checking appropriate checkbox
- Press the Install button to begin installation
- Note. Most of the components require administrative privileges because they install device drivers or write to Program Files and other system folders. On Windows Vista/7 you will be presented with UAC prompts when appropriate during installation. If you plan to use iPi Recorder under a user account that has no administrative rights, you can pre-install the other components separately using an administrator's account.
- You can plug only one MS Kinect / ASUS Xtion / PrimeSense Carmine sensor into one USB controller. The bandwidth of a single USB controller is not enough to record from 2 sensors.
- You can plug no more than 2 Sony PS Eye cameras into one USB controller; otherwise you will not be able to capture at 60 fps at 640 x 480 resolution.
- For more info see USB controllers.
Once installation is complete, iPi Recorder will launch automatically. Continue with the user's guide to learn how to use the software.
If a component is already installed, it has no checkbox and is marked with an ALREADY INSTALLED label. You should not install all optional components in advance without necessity. All of them can be installed separately at a later time. The component descriptions below contain the corresponding download links.
- Microsoft .NET Framework 4 - Client. This is a required component and cannot be unchecked.
This is basic infrastructure for running .NET programs. iPi Recorder is a .NET program.
- Web installer: http://www.microsoft.com/en-us/download/details.aspx?id=17113
- Standalone installer: http://www.microsoft.com/en-us/download/details.aspx?id=24872
- Playstation3 Eye Webcam :: WinUSB Drivers Registration. Check if you plan to work with Sony PS Eye cameras.
Device drivers for PS Eye camera.
- ASUS Xtion / PrimeSense Carmine :: OpenNI Redistributable and ASUS Xtion / PrimeSense Carmine :: PrimeSense Sensor. Check if you plan to work with ASUS Xtion, or ASUS Xtion Live, or PrimeSense Carmine depth sensors.
Device drivers and software libraries for ASUS Xtion / ASUS Xtion Live / PrimeSense Carmine.
- (Windows 7, 8) Microsoft Kinect :: MS Kinect SDK 1.5. Check if you plan to work with Microsoft Kinect depth sensors.
Device drivers and software libraries for Microsoft Kinect. Requires Windows 7 and later.
- (Windows XP, Vista) Microsoft Kinect :: PrimeSense psdrv3.sys Driver Registration. Check if you plan to work with Microsoft Kinect depth sensors.
Alternative device drivers for Microsoft Kinect.
- 64-bit OS: http://files.ipisoft.com/drivers/KinectPsdrv3_iPi-x64.msi
- 32-bit OS: http://files.ipisoft.com/drivers/KinectPsdrv3_iPi-x86.msi
- Note. iPi Recorder does support working with Kinect sensors on Windows 7 using the PrimeSense driver. If you installed and used it for Kinect with iPi Recorder 1.x, you can continue using it with iPi Recorder 2.
- iPi Recorder 2.x.x.x. This is a required component and cannot be unchecked.
iPi Recorder itself.
iPi Mocap Studio
- Read and accept the license agreement by checking corresponding checkbox.
- Press the Install button to begin installation.
Once installation is complete, iPi Mocap Studio will launch automatically.
All components are required for installation. Please note that the installation of Microsoft .NET Framework 3.5 SP1 requires an Internet connection. If needed, you can download the offline installer for Microsoft .NET separately and run it before iPi Mocap Studio setup. Other components are included with the iPi Mocap Studio setup.
For more info about license protection see License.
Recording Video with Multiple PlayStation Eye Cameras
For a multiple PlayStation Eye configuration, you need a minimum of 13 feet by 13 feet of space (4 meters by 4 meters). In a smaller space, the actor simply won’t fit into the cameras’ field of view.
For 640 by 480 camera resolution, the capture area can be as big as 20 feet by 20 feet (7 meters by 7 meters). That should be enough for capturing motions like running, dancing, etc.
A light-color background (light walls and light floor) is recommended for markerless motion capture. iPi Desktop Motion Capture is designed to work with real-life backgrounds. A multi-camera configuration (3 cameras and up) can handle a certain amount of background clutter. Please keep in mind that the system can be confused if your background has large objects of the same color as the actor's clothes.
Using a green or a blue backdrop may improve results, but you are not required to use a backdrop if you have a reasonable office or home environment with light-color walls and bright lighting.
For best results, your environment should have multiple light sources for uniform, ambient lighting. Typical office lighting with multiple light sources on the ceiling should be quite suitable for markerless motion capture. In a home environment, you may need to use additional light sources to achieve more uniform lighting.
Please note that the system cannot work in direct sunlight. If you plan a motion capture session outdoors you should choose a cloudy, overcast day.
The actor should be dressed in a solid-color long-sleeve shirt, solid-color trousers (or jeans) and solid-color shoes. Deep, saturated colors are preferable. Casual clothes like jeans should be OK for use with the markerless mocap system. iPi Desktop Motion Capture uses clothing color to separate the actor from the background and therefore cannot work with completely arbitrary clothing.
Recommended shirt (torso) colors are black, blue or green. Red is not recommended because it can blend with human skin color, making it difficult for the system to see hands placed over the torso. Black is useful for reducing self-shadows on the torso. If you have bright uniform lighting, you can get better results with a primary-color (blue or green) shirt.
Recommended jeans/trousers color is blue.
Recommended shoe color is black.
iPi Desktop Motion Capture has an option of using a T-shirt over a long-sleeve shirt for actor clothing. However, a simple long-sleeve shirt may result in more accurate motion capture.
Don’t stand too close to a wall. Shadows on the wall may confuse the system. If standing close to a wall is unavoidable, you should use additional light sources to light the wall behind the actor to minimize shadows.
The system can have problems tracking push-ups and similar motions because of shadows on the floor. You can improve tracking of such motions by using additional light sources to light the floor.
Please record a video using iPi Recorder application. It supports recording with Sony PlayStation Eye cameras, depth sensors (Kinect) and DirectShow-compatible webcams (USB and FireWire).
iPi Recorder is a stand-alone application and does not require a powerful video card. You may choose to install it on a notebook PC for portability. Since it is free, you can install it on as many computers as you need.
Please run iPi Recorder and complete setup and background recording steps following the instructions:
It is recommended that you record all videos at maximum available framerate. High framerate helps reduce motion blur and capture fine details of the motion.
The maximum possible framerate for the Sony PlayStation Eye camera is 60 frames per second. Sony advertises the PlayStation Eye as capable of capturing at 120 frames per second, but framerates over 60 FPS result in too much noise in the camera sensor and are not usable for motion capture.
A framerate lower than 30 frames per second is not recommended for motion capture.
4 cameras at 320 by 240 resolution
A dual-core CPU should be fast enough for recording a 4-camera video at 320 by 240 resolution at 60 frames per second.
4 cameras at 640 by 480 resolution at 60 frames per second
A quad-core CPU is recommended for recording at 640 by 480 resolution at 60 frames per second. If you have a dual-core CPU you may need to configure a lower framerate and/or lower compression quality to be able to record video at 640 by 480.
6 cameras at 640 by 480 resolution at 60 frames per second
A quad-core CPU clocked at 2.0 GHz (or better) is recommended for recording at 640 by 480 resolution at 60 frames per second. You will also need to get an additional USB controller.
All modern computers (dual-core and better) based on Intel, AMD and Nvidia chipsets have two high-speed USB (USB 2.0) controllers on board. That should give you enough bandwidth to record with 4 cameras at 640x480 (raw Bayer format) at 60 FPS, or 6 cameras at 640x480 (raw Bayer format) at 40 FPS.
Under certain circumstances you may need to get additional USB controllers.
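The two-cameras-per-controller guideline follows from the same per-camera data rate, summed per USB controller. This is a sketch; it assumes raw Bayer video at 1 byte per pixel (USB 2.0 is 480 Mbit/s in theory, but practical throughput is considerably lower):

```python
# Bandwidth demand per USB 2.0 controller for PS Eye cameras streaming
# raw Bayer video (1 byte per pixel).
def per_controller_demand_mib(cameras, fps, width=640, height=480):
    """Required throughput in MByte/sec (binary) for one USB controller."""
    return cameras * width * height * fps / (1024 * 1024)

print(round(per_controller_demand_mib(2, 60), 1))  # 35.2 -> 4 cameras split over 2 controllers at 60 FPS
print(round(per_controller_demand_mib(3, 40), 1))  # 35.2 -> 6 cameras split over 2 controllers at 40 FPS
```

Both configurations land at the same roughly 35 MByte/sec per controller, which is consistent with the note above that 6 cameras on two built-in controllers top out at 40 FPS at full resolution.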
Three Camera Configuration
Recommended configuration for 3-camera setup is a half-circle:
Virtual view of the same scene:
Four Camera Configuration
You can set up 4 cameras in a half-circle or a full-circle configuration, depending on available space. You can improve accuracy by placing one of the cameras high over the ground (like 3 meters high).
Recommended configuration for 4-camera setup in half circle:
Six Camera Configuration
You can set up 6 cameras in a full-circle or a half-circle configuration, depending on available space. You can improve accuracy by placing one or two cameras high over the ground (like 3 meters high).
Recommended configuration for 6-camera full-circle setup:
Install the cameras on tripods and connect cables.
Sony PlayStation Eye cameras do not have a standard tripod mounting screw, so you will have to use some kind of ad hoc solution. The simplest approach is to fix the cameras to the tripods using sticky tape.
When mixing active and passive USB cables, make sure cable connection order is correct (computer->active cable->passive cable->camera).
If you're using the PlayStation Eye camera, make sure you have the lens set to the wide setting.
Calibration is the process of computing accurate camera positions and orientations from a video of a user waving a small glowing object (called a “marker”). This step is essential and required for multi-camera system setup.
Important. Once you have calibrated the camera system, you should not move your cameras for subsequent video shoots. If you move even one camera, you need to perform calibration again.
Importance of high frame rate
You should record calibration video at the same resolution as your action video and at the same (or higher) frame rate.
Calibration at a different resolution may lead to reduced accuracy because cameras usually have different minor distortions at different resolutions (caused by internal scaling algorithm).
Calibration at low frame rate may lead to reduced accuracy because of increased synchronization errors.
A Mini Maglite flashlight is recommended for calibration. It is a very common flashlight in the US and many other countries. Removing the flashlight reflector converts it into an ideal glowing marker that is easily detectable by the motion capture software.
If you cannot get a Mini Maglite, you can use some other similar flashlight.
Step 1: Running iPi Recorder in calibration mode
Run iPi Recorder and choose one of the darkening modes in the "darkening for calibration" list (for Sony PS Eye cameras),
or set Exposure to a reasonably small value (for DirectShow-compatible web cameras).
This is important because it helps reduce motion blur during calibration.
Video will look dim in calibration mode.
- Important! Do not turn off the light in the room during calibration! This will not help the software but will make it harder for you to see what is happening on recorded video when you view it later.
Step 2: Record calibration video
Start video recording.
Move the marker slowly through your entire capture volume (front-top-right-bottom-left-back-top-right-bottom-left). Start from top and move the marker in a descending spiral motion.
Tip. The exact trajectory of the marker is not so important, just try to cover the whole capture volume, or at least its perimeter.
Tip. You should make the marker visible to as many cameras as possible at all times. Hold the marker in a straight arm, away from your body. In a circle configuration, when approaching the boundary of the capture area, keep the marker inside the area and your body outside.
Put the marker on the ground at each corner and at the center of the capture volume. At least 4-5 ground points are needed for correct detection of the ground plane.
Step 3: stop recording and check recorded video
- There is no significant motion blur (image of marker looks like a round spot rather than an ellipse or a luminescent line)
- Most of the time the marker is visible in at least 3 cameras and is not completely obscured by the human body
Step 4: Take note of the height of your first camera above the ground
Take note of the height of your first camera above the ground. You will need this parameter later. If you cannot measure this height accurately, at least make a rough estimate.
Step 5: process calibration video in iPi Mocap Studio
Strictly speaking, you can postpone processing the calibration video until after you have finished recording your other videos (e.g. your action videos). However, it is a good idea to process the calibration video as soon as it is recorded, because this helps you ensure that you have a good calibration. (An incorrectly recorded calibration video may later affect your ability to process action videos.)
To process calibration video please do the following:
- Load your calibration video into iPi Mocap Studio
- Important. Adjust the Region of Interest to cover the part of video that contains the glowing marker.
- Set the Diagonal Field of View (FOV) for your cameras on the “Scene” tab. If you use Sony PlayStation Eye or Logitech QuickCam 9000 cameras, leave the FOV value at the default 75 degrees.
- Go to the “Calibration” tab. Check the “Auto detect initial camera positions” checkbox. Click the “Calibrate” button and wait while the system finishes calibration.
- The calibration algorithm may occasionally fail to find correct camera positions. If this happens, you should manually adjust the initial camera positions to roughly match your configuration. What matters is the correct order of the cameras around the capture area and their approximate view directions.
- Reset camera positions to whichever standard half-circle or full-circle configuration best suits your setup.
- For each camera that requires adjusting, switch to that camera using the toolbar button and correct its position using the controls on the “Scene” tab.
- Uncheck the “Auto detect initial camera positions” checkbox.
- Click the “Calibrate” button. This will rerun the calibration process without recomputing marker positions.
Follow video tutorial:
Resulting scene should look like this:
Green points designate correctly detected 3D marker positions. Red points designate misdetected marker positions. 10-20% red points should be considered normal. Calibration is good if you have at least 70% green points.
Tip. When doing manual camera positioning before calibration, you may find it useful to match the model's position with the actor on a video:
- Select the frame where the actor is standing relatively straight.
- Show the model by checking the "View > Skin" menu item.
- Adjust the camera position to fit the model with the actor's image.
Step 6: Define ground plane
You need at least 3 points in 3D space to define the ground plane. For each point, click on it in the 3D view and press the “Mark as ground” button.
WARNING: If you do not mark ground points, the ground plane will be incorrect, and there is no sense in using the Foot tracking option or camera height values.
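The reason at least 3 points are needed can be illustrated with a short sketch: three non-collinear points determine a plane, whose normal comes from a cross product. The coordinates below are hypothetical; iPi Mocap Studio performs this fit internally, and marking 4-5 points gives it redundancy against misdetected markers:

```python
# Normal of the plane through three 3D points, via the cross product of
# two edge vectors. Any two of the points alone would leave the plane
# free to rotate, which is why three are the minimum.
def plane_normal(p1, p2, p3):
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# Three points on a level floor (y is the up axis) yield a vertical normal:
print(plane_normal((0, 0, 0), (1, 0, 0), (0, 0, 1)))  # [0, -1, 0]
```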
Step 7: Set scene scale using camera height as reference
Now the cameras in your scene are properly oriented relative to the other cameras and to the ground plane. But you still need to find one more parameter: the scene scale.
Use camera #1's height above the ground to set the correct scene scale.
Note: The height of a camera can only be used if the ground plane is properly defined. If the ground plane is not defined, you can use the distance between cameras #1 and #2 to set the scene scale.
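Conceptually, the single measured height fixes a uniform scale factor for the whole scene. This is a sketch with hypothetical values; the actual rescaling happens inside iPi Mocap Studio:

```python
# One real-world measurement (camera #1's height above the ground plane)
# determines the factor that maps scene units to meters.
def scene_scale(measured_camera_height_m, camera_height_in_scene_units):
    """Uniform scale factor applied to every coordinate in the scene."""
    return measured_camera_height_m / camera_height_in_scene_units

# Camera #1 measured at 2.5 m above ground, sitting at 1.25 units in the scene:
print(scene_scale(2.5, 1.25))  # 2.0 -> every scene coordinate gets multiplied by 2.0
```

This is also why a rough height estimate still produces usable results: an error in the measurement scales the whole scene slightly, but leaves relative camera geometry intact.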
Step 8: Save calibration result into *.scene.xml file
Recording Actor's Performance
Recommended layout of an action video
- Enter the actor.
- Strike a T-pose.
It is preferable to have the actor strike a “T-pose” before the actual action. The software needs the T-pose to build the actor appearance model during tracking. If you make several takes with one actor, you do not need to re-record the T-pose before each take.
When using depth sensors, it is recommended to face the palms down, as this corresponds to the default orientation of the model's hand bones. When using color cameras, it is recommended to face the palms forward, as this helps the software determine the right color for the model's hands.
A take is a concept originating from cinematography. In a nutshell, a take is a single continuous recorded performance.
Usually it is a good idea to record multiple takes of the same motion, because a lot of things can go wrong for purely artistic reasons.
A common problem with motion capture is “clipping” in the resulting 3D character animation, for example arms entering the body of the animated computer-generated character. Many CG characters have various items and attachments like a bullet-proof vest, fantasy armor or a helmet. It can be easy for an actor to forget about the shape of the CG model.
For this reason, you may need to schedule more than one motion capture session for the same motions. Recommended approach is:
- Record the videos
- Process the videos in iPiStudio
- Import your target character into iPiStudio and review the resulting animation
- Give feedback to the actor
- Schedule another motion capture session if needed
Ian Chisholm's hints on motion capture
Ian Chisholm is a machinima director and actor and the creator of critically acclaimed Clear Skies machinima series. Below are some hints from his motion capture guide based on his experience with motion capture for Clear Skies III.
Three handy hints for acting out mocap:
- Don’t weave and bob around like you’re in a normal conversation – it looks terrible when finally onscreen. You need to be fairly (but not completely) static when acting.
- If you are recording several lines in one go, make sure you have lead in and lead out between each one, i.e. stand still! Otherwise, the motions blend into each other and it’s hard to pick a start and end point for each take.
- Stand a bit like a gorilla – have your arms out from your sides:
Well, obviously not quite that much. But anyway, if you don’t, you’ll find the arms clip slightly into the models and they look daft.
If you have a lot of capture to do, you need to strike a balance between short and long recordings. Aim for 30 seconds to 2 minutes. Too long is a pain to work on later due to the fiddliness of setting up takes, and too short means you are forever setting up T-poses.
Because motion capture is not a perfect art, and neither is acting, it’s best to perform multiple takes. I found that three was the best amount for most motion capture. Take less if it’s a basic move, take more if it’s complex and needs to be more accurate. It will make life easier for you in the processing stage if you signal the break between takes – I did this by reaching out one arm and holding up fingers to show which take it was.
As it’s the same actor looking exactly the same each and every time, and there is no sound, and the capture is in lowres 320*200, you really need to name the files very clearly so that you later know which act, scene, character, and line(s) the capture is for.
My naming convention was based on act, scene, character, page number of the scene, line number, and take number. You end up with something unpleasant to read like A3S1_JR_P2_L41_t3 but it’s essential when you’ve got 1500 actions to record.
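Chisholm's convention can be sketched as a simple formatting helper. The field names are inferred from his example; this is an illustration, not part of any iPi tool:

```python
# Build a take name from act, scene, character, page, line and take number,
# following the pattern of "A3S1_JR_P2_L41_t3".
def take_name(act, scene, character, page, line, take):
    return f"A{act}S{scene}_{character}_P{page}_L{line}_t{take}"

print(take_name(3, 1, "JR", 2, 41, 3))  # A3S1_JR_P2_L41_t3
```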
Processing Video from Multiple PlayStation Eye Cameras
Open multi-camera video in iPi Mocap Studio. Load scene configuration from file.
Adjust the Region of Interest. (Region of Interest should cover the part of video that contains the motion).
Go to the T-pose frame. Using the character move/rotate controls, roughly align the model with the image of the actor in one view. Adjust the character's height and proportions (on the “Actor” tab) to correspond to the actor. Push the “Analyze Appearance” button on the “Actor” tab to automatically adjust the model's colors. For multi-camera video, the “Analyze Appearance” button uses only one (the current) camera. This makes it easier to align the model with the image. You can refine actor appearance by alternating “Refit Pose” and “Analyze Appearance”. You can also edit the colors manually or with the eyedropper tool.
Go to the first frame of Region of Interest. Push the “Refit Pose” button on “Tracking” tab. If initial pose was recognized incorrectly, you can roughly adjust it manually and use auto-fit again.
Push the “Track Forward” button on “Tracking” tab.
You can save your motion capture project to a file. If your project is saved to a file, then the system will auto-save it after each processed frame during tracking.
Refer to video tutorial for tips on video processing:
Once initial tracking is performed on all (or part) of your video, you can begin cleaning up tracking errors (if any). Post-processing should be applied after clean-up.
Cleaning up tracking gaps
Tracking errors usually happen in a few specific video frames and propagate to multiple subsequent frames, resulting in tracking gaps. Examples of problematic frames:
- Occlusion (like one hand not visible in any of the cameras)
- Indistinctive pose (like hands folded on chest).
- Very fast motion with motion blur.
To clean up a sequence of incorrect frames (a tracking gap), you should use backward tracking:
- Go toward the last frame of the tracking gap, to a frame where the actor's pose is distinctive (no occlusion, no motion blur, etc.).
- If necessary, use Rotate, Move and Inverse Kinematics tools to edit character pose to match actor pose on video.
- Turn off Trajectory Filtering (set it to zero) so that it does not interfere with your editing.
- Click Refit Pose button to get a better fit of character pose.
- Click Track Backward button.
- Stop backward tracking as soon as it comes close to the nearest good frame.
- If necessary, go back to remaining parts of tracking gap and use forward and backward tracking to clean them up.
Cleaning up individual frames
To clean up individual frames you should use a combination of editing tools (Rotate, Move and Inverse Kinematics) and Refit Pose button.
Note: after a “Refit Pose” operation, iPiStudio automatically applies Trajectory Filtering to produce a smooth transition between frames. As a result, the pose in the current frame is affected by nearby frames. This may look confusing. If you want to see the exact result of the “Refit Pose” operation in the current frame, you should turn off Trajectory Filtering (set it to zero), but do not forget to change it back to a suitable value later.
Tracking errors that cannot be cleaned up using iPi Studio
Not all tracking errors can be cleaned up in iPiStudio using automatic tracking and Refit Pose button.
- Frames immediately affected by occlusion sometimes cannot be corrected. Recommended workarounds:
- Manually edit problematic poses (not using Refit Pose button).
- Record a new video of the motion and try to minimize occlusion.
- Record a new video of the motion using more cameras.
- Frames immediately affected by motion blur sometimes cannot be corrected. Recommended workarounds:
- Manually edit problematic poses (not using Refit Pose button).
- Edit problematic poses in some external animation editor.
- Record a new video of the motion using higher framerate.
- Frames affected by strong shadows on the floor sometimes cannot be corrected. A typical example is push-ups. This is a limitation of the current version of markerless mocap technology. iPiSoft is working to improve tracking in future versions of iPiStudio.
- Some other poses can be recognized incorrectly by iPiStudio. This is a limitation of the current version of markerless mocap technology. iPiSoft is working to improve tracking in future versions of iPiStudio.
After the primary tracking and cleanup are complete, you can optionally run the Refine pass (see the Refine Forward and Refine Backward buttons). It slightly improves the accuracy of pose matching and can automatically correct minor tracking errors. However, it takes a bit more time than the primary tracking, so it is not recommended for quick-and-dirty tests.
Important. Refine should be applied with the same tracking parameters (e.g. feet tracking, head tracking) as the primary tracking, in order not to lose previously tracked data.
Important. Refine should be applied before mixing in motion controller data. Also, if you plan to manually edit the animation (beyond automatic cleanup with Refit Pose), do such edits after applying Refine.
In contrast to the primary tracking, this pass does no pose prediction and bases its computations solely on the current pose in each frame. Essentially, running Refine is equivalent to automatically applying Refit Pose to a range of previously tracked frames.
Post-processing: Jitter Removal
The Jitter Removal filter is a powerful post-processing filter. It should be applied after cleaning up tracking gaps and errors. It is recommended that you always apply the Jitter Removal filter before exporting animation.
The Jitter Removal filter suppresses unwanted noise while preserving sharp, dynamic motions. By design, this filter should be applied to relatively large segments of animation (no less than 50 frames).
Range of frames affected by Jitter Removal is controlled by current Region of Interest.
You can configure Jitter Removal options for specific body parts. Default setting for Jitter Removal “aggressiveness” is 1 (one tick of corresponding slider). Oftentimes, you can get better results by applying a slightly more aggressive Jitter Removal for torso and legs. Alternatively, you may want to use less aggressive Jitter Removal settings for sharp motions like martial arts moves.
The Jitter Removal filter makes an internal backup of all data produced by the tracking and clean-up stages. Therefore, you can re-apply Jitter Removal multiple times. Each subsequent run works off the original tracking/clean-up results and overrides previous runs.
Post-processing: Trajectory Filtering
The Trajectory Filter is a traditional digital signal filter. Its purpose is to filter out minor noise that remains after the Jitter Removal filter.
The Trajectory Filter is very fast. It is applied on the fly to the current Region of Interest.
The default setting for the Trajectory Filter is 1. Higher settings result in multiple passes of the filter. It is recommended that you leave it at the default setting.
The Trajectory Filter can be useful for “gluing” together multiple segments of animation processed with different Jitter Removal options: change the Region of Interest to cover all of your motion (e.g. multiple segments processed with different Jitter Removal settings); change the Trajectory Filtering setting to 0 (zero); then change it back to 1 (or another suitable value).
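The document does not specify the exact filter, but the behavior it describes (a fast smoothing filter whose higher settings mean multiple passes) can be illustrated with a minimal 3-tap moving average. This is a hedged sketch, not the actual iPi implementation:

```python
# A stand-in for Trajectory Filtering: each pass averages every sample of a
# joint-trajectory channel with its two neighbors; the "setting" is the
# number of passes. Endpoints are left untouched.
def smooth(values, passes=1):
    for _ in range(passes):
        out = values[:]
        for i in range(1, len(values) - 1):
            out[i] = (values[i - 1] + values[i] + values[i + 1]) / 3.0
        values = out
    return values

print(smooth([0.0, 0.0, 3.0, 0.0, 0.0]))  # [0.0, 1.0, 1.0, 1.0, 0.0]
```

The example shows why such a filter removes only minor residual noise: a single-frame spike is flattened and spread over its neighbors, while slow, deliberate motion passes through nearly unchanged.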
Export and Motion Transfer
Use “File->Export Animation” menu item to export all animation frames from within Region of Interest.
To export animation for specific take, right-click on take and select “Export Animation” item from pop-up menu.
Default iPi Character Rig
The default skeleton in iPi Studio is optimized for markerless motion capture. It may or may not be suitable as a skeleton for your character. Default iPi skeleton in T-pose has non-zero rotations for all joints. Please note that default iPi skeleton with zero rotations does not represent a meaningful pose and looks like a random pile of bones.
By default, iPi Studio exports a T-pose (or a reasonable default pose, for a custom rig after motion transfer) in the first frame of the animation. If this is not desired, uncheck the "Export T-pose in first frame" checkbox.
Motion Transfer and Custom Rigs
iPi Studio has integrated motion transfer technology. You can import your character into iPi Studio via the “File->Import Target Character” menu item, and your motion will be transferred to your character. You may need to assign bone mappings on the “Export” tab for motion transfer to work correctly. You can save your motion transfer profile to an XML file for future use. iPi Studio has pre-configured motion transfer profiles for many popular rigs (see below). If you export the animation to a format different from the one your target character was imported in, only the rig will be exported. If you use the same format for export, the skin will be exported as well.
Use the “Export Animation for MotionBuilder” menu item to export your motion in MotionBuilder-friendly BVH format. MotionBuilder-friendly skeleton in T-pose has zero rotations for all joints, with bone names consistent with MotionBuilder conventions. This format may also be convenient for use with other apps like Blender.
3D MAX Biped
Use the “Export Animation for 3D MAX” menu item to export your motion in 3D MAX-friendly BVH format.
Create a Biped character in 3D MAX (“Create->Systems->Biped”). Go to “Motion” tab. Click “Motion Capture” button and import your BVH file.
Our user Cra0kalo created an example Valve Biped rig for use with 3D MAX. It may be useful if you work with Valve Source Engine characters.
Recent versions of Maya (starting with Maya 2011) have a powerful biped animation subsystem called "HumanIK". Animations exported from iPiStudio in MotionBuilder-friendly format (via the “Export Animation for MotionBuilder” menu item) should work fine with Maya 2011 and HumanIK. The following video tutorials can be helpful:
- Maya HumanIK Mocap retarget with iPi Mocap Studio, by Wes McDermott
- Non-Destructive Live Retargeting — Maya 2011 New Features
- Motion Capture Workflow With Maya 2011
- Humanik Maya 2012 Part 6
For older versions of Maya please see the #Motion Transfer and Custom Rigs section. Recommended format for import/export with older versions of Maya is FBX.
iPi Studio supports FBX format for import/export of animations and characters. By default, iPiStudio exports animations in FBX 6.0 format using FBX SDK 2012. If your target character is in FBX 7.0 or newer format, iPiStudio will export retargeted animation in FBX 2012 format.
Some applications do not use the latest FBX SDK and may have problems importing newer FBX files. In case of problems, you can use Autodesk's free FBX Converter to convert your animation file to an appropriate FBX version.
iPi Studio supports the COLLADA format for import/export of animations and characters. The current version of iPi Studio exports COLLADA animations as matrices. If you encounter incompatibilities with other applications' implementations of the COLLADA format, we recommend using Autodesk's free FBX Converter to convert your data between FBX and COLLADA formats. FBX is known to be more universally supported across 3D graphics packages.
Recommended format for importing target characters from LightWave to iPi Studio is FBX. Recommended format for bringing animations from iPi Studio to LightWave is BVH or FBX.
Our user Eric Cosky published a tutorial on using iPiStudio with SoftImage|XSI:
Export your Poser character in T-pose in BVH format (File->Export). Import your Poser character skeleton into iPi Studio (File->Import Target Character). Your animation will be transferred to your Poser character. Now you can use File->Export Animation to export your animation in BVH format for Poser.
Poser 8 has a bug with incorrect wrist animation import. The bug can be reproduced as follows: export a Poser 8 character in T-pose in BVH format; import the character back into Poser 8; note how the wrists end up twisted unnaturally as a result.
A workaround for the wrists bug is to chop off the wrists from your Poser 8 skeleton (for instance, using BVHacker) before importing the Poser 8 target character into iPi Studio. Missing wrists should not cause any problems during motion transfer in iPi Studio if your BVH file is edited correctly. Poser will ignore the missing wrists when importing the resulting motion, so the motion will look right in Poser (wrists in the default pose, as expected).
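As an illustration of the kind of edit BVHacker performs here, the sketch below removes named joints (with their subtrees) from a BVH file using plain text processing, replacing each with an End Site and dropping the corresponding motion columns. The joint name "lHand" and the miniature sample skeleton are assumptions for demonstration only; real Poser skeletons name and nest their wrist joints differently, so check the actual joint names in your exported BVH first.

```python
import re

def chop_joints(bvh_text, names):
    """Remove the named joints (and their subtrees) from a BVH file,
    replacing each with an End Site and dropping its motion columns."""
    lines = bvh_text.splitlines()
    out = []
    removed = []      # (first channel index, channel count) per chopped subtree
    chan_cursor = 0   # running channel index while walking the hierarchy
    in_motion = False
    i = 0
    while i < len(lines):
        line = lines[i]
        s = line.strip()
        if in_motion or s == "MOTION":
            in_motion = True
            out.append(line)
            i += 1
            continue
        m = re.match(r"(ROOT|JOINT)\s+(\S+)", s)
        if m and m.group(2) in names:
            depth = 0
            start, count = chan_cursor, 0
            offset_line, got_offset = "OFFSET 0 0 0", False
            j = i
            while True:  # skip forward to the subtree's closing brace
                t = lines[j].strip()
                if t == "{":
                    depth += 1
                elif t == "}":
                    depth -= 1
                    if depth == 0:
                        break
                elif t.startswith("OFFSET") and not got_offset:
                    offset_line, got_offset = t, True
                elif t.startswith("CHANNELS"):
                    n = int(t.split()[1])
                    count += n
                    chan_cursor += n
                j += 1
            removed.append((start, count))
            indent = line[:len(line) - len(line.lstrip())]
            out += [indent + "End Site", indent + "{",
                    indent + "  " + offset_line, indent + "}"]
            i = j + 1
            continue
        if s.startswith("CHANNELS"):
            chan_cursor += int(s.split()[1])
        out.append(line)
        i += 1
    # drop the chopped joints' columns from every motion data line
    drop = {k for st, cnt in removed for k in range(st, st + cnt)}
    result, data = [], False
    for line in out:
        if data and line.strip():
            vals = [v for k, v in enumerate(line.split()) if k not in drop]
            result.append(" ".join(vals))
        else:
            result.append(line)
            if line.strip().startswith("Frame Time"):
                data = True
    return "\n".join(result)

# Tiny demonstration skeleton (hypothetical names, one frame of motion)
SAMPLE = """HIERARCHY
ROOT hips
{
  OFFSET 0 0 0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT lHand
  {
    OFFSET 1 0 0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0 1 0
    }
  }
}
MOTION
Frames: 1
Frame Time: 0.033333
0 0 0 10 20 30 40 50 60"""

res = chop_joints(SAMPLE, {"lHand"})
```

Note that the chopped joint's own OFFSET is reused for the new End Site, so the parent bone keeps its original length.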
The workflow for DAZ 3D is very similar to Poser. Import your DAZ 3D character skeleton into iPi Studio (File->Import Target Character). Your animation will be transferred to your DAZ 3D character. Now you can use File->Export Animation to export your animation in BVH format for DAZ 3D.
IMPORTANT: You can use a DAZ character in COLLADA (.dae) format for preview, but it is strongly recommended that you use a DAZ character in BVH format for motion transfer. DAZ Studio has a problem with the COLLADA (.dae) format: it does not export all bones into COLLADA (.dae). In particular, the eyeBrow and bodyMorphs bones are not exported. DAZ Studio does not use bone names when importing motions; instead, it takes rotations from the list of angles as though it were a flat list with exactly the same positions as in the DAZ Studio internal skeleton. As a result, when you transfer motion to a COLLADA character and import it back into DAZ Studio, the motion will look wrong. iPi Studio displays a warning about this. To avoid this problem, import your DAZ target character in BVH format - DAZ Studio is known to export characters in BVH format correctly (with all bones).
You can improve the accuracy of motion transfer with some additional preparation of your DAZ 3D skeleton in BVH format. For DAZ 3D Michael 4.0 and similar characters, you may need to clamp thigh joint rotations to zero to avoid unnatural leg bending. For DAZ 3D Victoria 4.0, you may need to adjust the foot joint rotation to change the default “high heels“ foot pose to a more natural one.
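The clamping step can also be done with a small script. The sketch below zeroes the rotation channels of named joints in every frame of a BVH file; the joint name "lThigh" and the miniature sample skeleton are assumptions for demonstration, and the actual joint names in your DAZ 3D export may differ.

```python
import re

def zero_joint_rotations(bvh_text, joint_names):
    """Set the rotation channels of the named joints to 0.0 in every
    motion frame of a BVH file (position channels are left untouched)."""
    lines = bvh_text.splitlines()
    cursor, current = 0, None
    zero_cols = set()
    # Walk the hierarchy to find which motion columns belong to the joints
    for line in lines:
        s = line.strip()
        m = re.match(r"(ROOT|JOINT)\s+(\S+)", s)
        if m:
            current = m.group(2)
        elif s.startswith("CHANNELS"):
            parts = s.split()
            n = int(parts[1])
            if current in joint_names:
                for k, ch in enumerate(parts[2:2 + n]):
                    if ch.endswith("rotation"):
                        zero_cols.add(cursor + k)
            cursor += n
        elif s == "MOTION":
            break
    # Rewrite the motion data lines, zeroing the selected columns
    out, data = [], False
    for line in lines:
        if data and line.strip():
            vals = ["0.0" if k in zero_cols else v
                    for k, v in enumerate(line.split())]
            out.append(" ".join(vals))
        else:
            out.append(line)
            if line.strip().startswith("Frame Time"):
                data = True
    return "\n".join(out)

# Tiny demonstration skeleton (hypothetical names, one frame of motion)
SAMPLE = """HIERARCHY
ROOT hips
{
  OFFSET 0 0 0
  CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
  JOINT lThigh
  {
    OFFSET 0 -1 0
    CHANNELS 3 Zrotation Xrotation Yrotation
    End Site
    {
      OFFSET 0 -1 0
    }
  }
}
MOTION
Frames: 1
Frame Time: 0.033333
0 5 0 1 2 3 7 8 9"""

res = zero_joint_rotations(SAMPLE, {"lThigh"})
```

Because a target-character BVH typically contains only a single T-pose frame, zeroing the rotations across all frames amounts to straightening the pose.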
The current version of iPi Studio can only export animation in iClone-compatible BVH format. The iMotion format is not supported yet. This means you will need iClone PRO to import the motion into iClone; the Standard and EX versions of iClone do not have the BVH Converter and therefore cannot import BVH files.
The workflow for iClone is straightforward. Export your animation using the “Export Animation for iClone” menu item. Go to the Animation tab in iClone and launch the BVH Converter. Import your BVH file with the Default profile, click “Convert” and save the resulting animation in iMotion format. Now your animation can be applied to iClone characters.
iClone expects animation sampled at 15 frames per second. For other frame rates, you may need to create a custom BVH Converter profile by copying the Default profile and editing its “Frame Rate” setting.
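The required resampling is simple frame decimation. As a rough illustration of the arithmetic (not a replacement for a proper BVH Converter profile), the sketch below keeps every n-th frame so that, for example, a 60 fps BVH becomes approximately 15 fps; it does not interpolate between frames.

```python
def resample_bvh(bvh_text, target_fps=15.0):
    """Decimate BVH motion frames to approximately target_fps by keeping
    every n-th frame (no interpolation)."""
    lines = bvh_text.splitlines()
    fr = next(i for i, l in enumerate(lines) if l.strip().startswith("Frames:"))
    ft = next(i for i, l in enumerate(lines) if l.strip().startswith("Frame Time"))
    src_dt = float(lines[ft].split(":")[1])          # seconds per source frame
    step = max(1, round((1.0 / target_fps) / src_dt))  # e.g. 60 fps -> keep every 4th
    kept = lines[ft + 1:][::step]
    lines[fr] = "Frames: %d" % len(kept)
    lines[ft] = "Frame Time: %.6f" % (src_dt * step)
    return "\n".join(lines[:ft + 1] + kept)

# Tiny demonstration file: 8 frames at 60 fps (hypothetical skeleton)
SAMPLE = """HIERARCHY
ROOT hips
{
  OFFSET 0 0 0
  CHANNELS 3 Zrotation Xrotation Yrotation
  End Site
  {
    OFFSET 0 1 0
  }
}
MOTION
Frames: 8
Frame Time: 0.016667
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0"""

res = resample_bvh(SAMPLE)  # keeps frames 1 and 5, Frame Time becomes ~0.066667
```

For frame rates that are not integer multiples of 15 fps, a real converter would interpolate instead of dropping frames, which is why the custom BVH Converter profile is the preferred route.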
The BVH Converter in iClone 4 has a bug that causes distortion of leg animation. iPi Studio exports iClone-optimized BVH correctly, as can be verified by reviewing the exported BVH motion in BVHacker, MotionBuilder or another third-party application. No workaround is known. We recommend that you contact the iClone developers about this bug, as it is outside of iPi Soft's control.
Valve Source Engine SMD
Import .smd file for your Valve Source Engine character into iPi Studio via “File->Import Target Character” menu item. Your animation will be transferred to your character. Now you can use File->Export Animation to export your animation in Valve Source Engine SMD format.
Our user Cra0kalo created an example Valve Biped rig for use with 3D MAX. It may be useful if you wish to apply more than one capture through MotionBuilder or edit custom keyframes in MAX.
Valve Source Filmmaker
First, you need to import your character (or its skeleton) into iPi Mocap Studio, for motion transfer.
There are currently 3 ways of doing this:
- You can import an animation DMX (in default pose) into iPi Mocap Studio. Since it has a skeleton, it should be enough for motion transfer. To create an animation DMX with default pose, you can add your character to your scene in Source Filmmaker and export DMX for corresponding animation node:
- open "Animation Set Editor Tab";
- click "+" -> "Create Animation Set for New Model";
- choose a model and click "Open";
- export animation for your model, in ASCII DMX format;
- There is a checkbox named Ascii in the top area of the export dialog.
- Alternatively, you can just import an SMD file with your character into iPi Mocap Studio. For example, SMD files for all Team Fortress 2 characters can be found in your SDK in a location similar to the following (you need to have the Source SDK installed): C:\Program Files (x86)\Steam\steamapps\<your steam name>\sourcesdk_content\tf\modelsrc\player\pyro\parts\smd\pyro_model.smd
- If you created a custom character in Maya, you should be able to export it in DMX model format (please see Valve documentation on how to do this).
Then you can import your model DMX into iPi Mocap Studio. Current version of iPi Mocap Studio cannot display character skin, but it should display the skeleton. Skeleton should be enough for motion transfer.
To export animation in DMX, you should just use "General..." export menu item in iPi Mocap Studio and choose DMX from the list of supported formats. You may also want to uncheck "Export T-pose in first frame" option on the "Export" tab in iPi Mocap Studio.
Now you can import your animation into Source Filmmaker. There will be some warnings about missing channels for face bones but you can safely ignore them.
Old way involving Maya
This method was used until iPi Mocap Studio got DMX support, and it may still be useful if you run into trouble with DMX. Please see the following video tutorial series:
iPiStudio can export animations in Blender-friendly BVH format (File->Export animation for Blender).
If you have experience with Cinema4D please help to expand this Wiki by posting Cinema4D import/export tips to Community Tutorials section of our user forum.
iPi Studio supports importing of skinned Evolver characters in COLLADA or FBX format. Import your Evolver character skeleton into iPi Studio (File->Import Target Character). Your animation will be retargeted to your Evolver character. Now you can use File->Export Animation to export your animation.
Evolver offers several different skeletons for Evolver characters. Here is an example motion transfer profile for Evolver "Gaming" skeleton: evolver_game.profile.xml
Import your Second Life character skeleton into iPi Studio (File->Import Target Character). Your animation will be transferred to your Second Life character. Now you can use File->Export Animation to export your animation in BVH format for Second Life.
See the discussion on our Forum for additional details: http://www.ipisoft.com/forum/viewtopic.php?f=2&p=7845
Please see our user forum for a discussion of animation import/export for Massive:
Please see the following video tutorial on how to use iPi Studio with IKinema WebAnimate:
Please see the following video tutorial on how to use iPi Studio with Jimmy|Rig Pro:
Potential problem: after installation, iPi Mocap Studio crashes on first start.
Possible cause: very often, this is caused by an incompatible video card. Other possible reasons are a broken .NET Framework installation or a broken DirectX installation.
Solution: check the system requirements and make sure your operating system and .NET Framework are up to date.
Two Kinects don't work together
Potential problem: two Kinect sensors do not work together.
Possible cause: most probably, both Kinects were plugged into one USB controller. In this case, the bandwidth of a single USB controller is not enough to handle video from two Kinects.
Solution: each Kinect should be plugged into a separate USB controller. Please refer to User_Guide_for_Dual_Kinect_Sensor_Configuration#Software_Installation
How to report bugs and issues
When reporting bugs and issues, please specify the following info:
- exact version of your operating system;
- exact model of your video card (you can use GPU-Z to find out the model of your video card);
- the number and models of your cameras.
You can post your bug reports on our User Forum or send them to iPiSoft tech support email.
How to send a video to iPiSoft tech support
Sending your videos to iPiSoft tech support can be helpful if you experience a problem with iPiSoft's system. iPiSoft promises to use your video only for debugging and not to disclose it to third parties.
To send a video, please upload it to a file sharing service such as filefactory.com and send us the link.
If you cannot send a video because of its size, consider sending screenshots. Screenshots are less informative than video, but they are still helpful for diagnosing various problems with tracking.
For video materials please refer to our Gallery