Amid much speculation and anticipation, the Kinect for Xbox 360 was launched in November 2010 as a natural user interface for the Xbox 360 console. Described by Microsoft as a 'game changer', the Kinect surpassed all sales expectations: it sold over 8 million units in its first 60 days and briefly held the Guinness World Record for the fastest-selling consumer electronics device, before being overtaken by the iPad 2. As of January 2012, over 18 million units had been shipped worldwide.
The low price point and technological capabilities of the device attracted the hacking and research community. Adafruit Industries offered a $3,000 prize to the first person to hack the Kinect and gain access to its cameras. The hack took only six days: Hector Martin quickly developed an OpenGL application to display images from the Kinect's depth and RGB streams.
Over the following weeks and months, numerous additional hacks were made on the Kinect as hackers and developers began investigating its "virtually limitless possibilities". With the Kinect holding potential in so many application areas, a simple application programming interface (API) was needed to allow developers to let their imagination run wild without getting caught up in clunky hardware access routines.
One of the rules of the original $3,000 Adafruit prize was that any source code be made open. On 10th November 2010, Hector Martin released his Kinect source code, known as 'libfreenect'. This initial release coincided with the formation of 'OpenKinect', a community of developers working on open source code for the Kinect.
On 30th November 2010, Nicolas Burrus released 'RGB Demo', built on the libfreenect back-end drivers. The aim was to provide a simple open source toolkit for accessing Kinect data and developing standalone computer vision applications. Because libfreenect is unofficial, it cannot access the on-board camera calibration data, so accuracy may be compromised in some applications. For this reason, Burrus added the ability to calibrate the Kinect using a conventional checkerboard image.
Approximately one month after the initial hack, PrimeSense (the developer of the depth camera technology behind the Kinect) decided to release its drivers and framework API to permit further development for the Kinect. Alongside these, it announced OpenNI, an initiative to promote and develop interoperable natural interaction applications, and released binary versions of NITE, its skeleton tracking middleware, usable with any device compatible with the OpenNI framework.
This was the final push needed to drive Kinect application development: with an easy-to-use API, an accurate skeleton tracking algorithm and built-in intrinsic camera calibration, numerous applications soon appeared in disciplines from robotics to healthcare and shopfitting to teaching.
Microsoft "didn't know what they didn't know about the Kinect" and were immensely surprised by the amount of interest it attracted. They responded to the development community on 16th June 2011 by releasing the Kinect SDK Beta v1. This SDK was free for non-commercial use and offered access to the Kinect's RGB, depth and audio streams, along with simple skeleton tracking and numerous sample programs. On 3rd November 2011, Beta v2 of the Kinect SDK was released, with various performance enhancements and improved skeleton tracking.
Still missing, however, was an SDK or API that could be used for commercial purposes. The release of Microsoft's Beta v2 SDK came with a promise to provide a commercial implementation in early 2012.
February 1st 2012 saw the launch of the official Microsoft Kinect SDK v1, offering various improvements over the Beta v2 release. The SDK was free of charge and could be used in commercial applications. At the same time, the Kinect for Windows sensor was released: visually the same as the Xbox Kinect apart from its branding, but with the added ability to operate in 'near mode', permitting depth calculation as close as 400 mm.
If you're beginning to develop with the Kinect, the choice of driver and SDK/API is an important decision, and it depends upon whether your project is for commercial or non-commercial purposes. Our getting started guide provides a great overview of the drivers discussed above and their advantages and disadvantages.