Thursday, March 31, 2011

Rig

Guys, it's time to start work on the rig. If we follow the original design, we will need the following hardware:

- 2 laptops
- 2 webcams (not attached to laptops)
- microphone
- glass panel
- wood
- external speaker system

I've currently got an abundance of wood (along with woodworking tools) so that's not an issue, but we will need to obtain the glass panel and webcams. If anyone has any of the above, let me know.

I'll start working the wood as soon as we obtain the glass panel (I'll need to know what glass measurements we are working with), so keep me posted.

Cheers

Monday, March 28, 2011

v10, Batch File, Rig

Hey guys, the version we've all been working on in class has been put up on Dropbox (v10).



v10
As a small summary for the sake of documentation, the issue with random in v9 has been sorted by moving random to setup(). The problem we had, where random was not equal to the song being played, was all down to the fact that random was being called in a method (addTUIO) which loops constantly, so its value was constantly changing.

Now it is called once in setup(), and remains unchanged until guess equals the random number (at which point setup() and draw() are reinitialised).
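As a minimal plain-Java sketch of the pick-once pattern (the names SAMPLE_COUNT and checkGuess are made up for illustration; the real sketch uses Processing's setup()/draw()):

```java
import java.util.Random;

public class GuessGame {
    static final int SAMPLE_COUNT = 8;      // hypothetical number of samples/songs
    static final Random rng = new Random();
    static int target;                      // the "random" value, chosen once per round

    // mirrors setup(): pick the playing sample once
    static void setup() {
        target = rng.nextInt(SAMPLE_COUNT);
    }

    // mirrors the check inside the draw loop: runs every frame,
    // but never re-rolls target until the guess is correct
    static boolean checkGuess(int guess) {
        if (guess == target) {
            setup();   // correct guess: start a new round
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        setup();
        int round = target;                    // target stays stable all round
        System.out.println(checkGuess(round)); // prints true
    }
}
```

The point is simply that the random draw lives in the one-shot setup path, not in the looping path.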

The main issue is still load time, which is now even more important since we are calling setup() over and over. As a start, we have tried mitigating the issue by removing sample-size declarations. By our calculations, reserving 2048 kB for each sample buffer was allocating memory which was not being used.

The next target (v11) should deal with program efficiency and cleaning up code, so feel free to make changes and update the blog once said changes have been made.



Also, well done for today's presentation, 85% is something to be proud of. However, apologies on my part for freezing in the middle of it all; presentations are evidently not my thing ;)



Batch Code
Also, the batch code for launching reacTIVision and the program may be found below. If you would like a copy, simply copy it into Notepad and alter the paths according to where your files are stored.

You will need to export the code into a Windows executable (which will create a new folder in your directory), then place the contents of this folder into your home directory, otherwise images and sounds will not load.

:BEGIN
CLS
title Orchestrate
START C:\Users\Kevin2\Desktop\reacTIVision-1.4\reacTIVision.exe
START C:\Users\Kevin2\Desktop\reacTIVision\audio\mplayer1\mplayer1.exe

:END

This bat file has also been included in the v9 rar file on Dropbox, along with a new icon for the bat file =)



Rig
Also, we will need to meet up ASAP to start work on the physical rig. As was highlighted today, we will need to allocate a lot of time for this, since there is a lot of trial and error involved in the distances. I've got copious amounts of wood at my place, along with woodworking tools; all we need to find is a piece of glass (30cm by 60cm?) for the table top. I may get that sorted through the basement refurbishment too; more info when it comes. =)



creating batch file

Hey guys,
As things stand, to run our code we have to manually open the reacTIVision console and Processing (and compile the Processing code).

Currently looking into creating a basic bat file which will do all the work for us, speeding up the boot process, and simplifying our lives on presentation day.

More info once developed, but here's the link I'll be following for bat creation: http://www.computerhope.com/batch.htm#windows

New Logo Design

What do you think about this new logo design...

D

Sunday, March 27, 2011

Summary of Content

PXV-ishii Tangible Bits: Beyond Pixels


This paper discusses a model of TUI, its key properties, genres, and applications, and summarizes the contributions made by the Tangible Media Group and other researchers since the publication of the first Tangible Bits paper.



  • Tangible User Interfaces (TUIs) aim to take advantage of haptic interaction skills, which is a significantly different approach from GUI


  • TUI makes digital information directly manipulatable with our hands


  • Urp (Urban Planning Workbench)


  • In Urp, physical models of buildings are used as tangible representations for digital models of the buildings. To change the location and orientation of buildings, users simply grab and move the physical model as opposed to pointing and dragging a graphical representation on a screen with a mouse. The physical forms of Urp's building models, and the information associated with their position and orientation upon the workbench represent and control the state of the urban simulation.


  • The physical artifacts also serve as controls for the underlying computational simulation (specifying the locations of objects). The specific physical embodiment allows a dual use: representing the digital model and allowing control of the digital representation.



GUI
The figure illustrates the current GUI paradigm in which generic input devices allow users to remotely interact with digital information. Using the metaphor of a seashore that separates the sea of bits from the land of atoms, the digital information is illustrated at the bottom of the water, and the mouse and screen are above sea level in the physical domain. Users interact with the remote controls, and ultimately experience an intangible, external representation of digital information (display pixels and sound).



TUI
Tangible User Interface aims in a different direction from GUI by using tangible representations of information that also serve as the direct control mechanisms of the digital information. By representing information in both tangible and intangible forms, users can more directly control the underlying digital representation using their hands.


The study then overviews 8 forms of TUI, but only one applies to our application, “Orchestrate”.


Interactive Surfaces / Tabletop TUI - Digital Desk is the pioneering work in this genre, and a variety of tabletop TUIs were developed using multiple tangible artifacts within common frames of horizontal work surfaces. One limitation of the above systems is the computer's inability to move objects on the interactive surfaces. To address this problem, the Actuated Workbench was designed to provide a hardware and software infrastructure for a computer to smoothly move objects on a table surface in two dimensions [34], providing an additional feedback loop for computer output, and helping to resolve inconsistencies that otherwise arise from the computer's inability to move objects on the table.


Although this study doesn’t have any direct relevance to the project, it is an interesting way to explore the future possibilities of TUIs.




P177-lucchi



This paper establishes the differences between touch and tabletop tangible interfaces in the quest to find the perfect interface between computers and people. Although an interesting study, it deals with precise figures regarding the accuracy and completion time of individual actions on both interfaces. The studies carried out in this paper don’t possess much relevance to the project besides indicating the direction TUIs are moving in.




p253- nishino


This paper discusses camera-based fiducial tracking using new methods unique to the topological design of the fiducial structure.



Matrix-Pattern and Pattern-Matching Approach

  • The use of matrix-pattern to encode IDs can be frequently seen in fiducial tracking systems.

  • CyberCode, ARToolkit Plus and ARTag are examples of previous studies

Fiducial Recognition

The first one is the black square in the center, which contains one white dot. This is used to obtain the rough angle of the fiducial in the video input image, using the vector from the center of the minimum bounding box of the black square to the white circle. The black and white regions surround this center black square. Each dot in those regions encodes a bit: 0 for a black dot and 1 for a white dot. These bits are sorted in clockwise order, starting from the rough angle obtained from the center circle, to decode the ID of the fiducial. Decoding the examples in Figure 5 yields their unique IDs: 48115, 64407 and 40879, from left to right. Notice that all these fiducials have the same topological structure. In the case of a 16-bit fiducial, there are only 17 different topological structures, since the number of black or white bit-encoding nodes can only vary between 0 and 16.
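As a toy illustration of that bit-decoding step (not the paper's actual code; the most-significant-bit-first ordering here is an assumption):

```java
public class FiducialDecode {
    // dots sorted clockwise from the reference angle:
    // false = black dot (bit 0), true = white dot (bit 1)
    static int decodeId(boolean[] dots) {
        int id = 0;
        for (boolean dot : dots) {
            id = (id << 1) | (dot ? 1 : 0); // assumed: most significant bit first
        }
        return id;
    }

    public static void main(String[] args) {
        // 16 dots whose bit pattern is 1011101111110011, i.e. the example ID 48115
        String bits = "1011101111110011";
        boolean[] dots = new boolean[bits.length()];
        for (int i = 0; i < bits.length(); i++) {
            dots[i] = bits.charAt(i) == '1';
        }
        System.out.println(decodeId(dots)); // prints 48115
    }
}
```

This also makes the 17-structure observation concrete: the topology only depends on how many of the 16 dots are white (0 through 16), not on which ones.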


The average time cost was measured over 1000 frames with 12 markers in input images captured at 640x480 resolution. reacTIVision also has several features to increase robustness, such as a frame equalizer and the use of information from the previous frame. Taking these features into the evaluation would increase the time cost. To compare the two systems, the authors excluded the time cost of reacTIVision features that their system has not yet implemented.


This paper helps the reader further understand how fiducial tracking works and how to incorporate the best methods in use to optimize the speed and efficiency of fiducial tracking.

Journals Summary

Getting a Grip on Tangible Interaction: A Framework on Physical Space and Social Interaction (p437-hornecker) (2/5)

  • Research on different frameworks for tangible interaction (especially social).
  • The increasing importance of TUIO objects within the HCI world.
  • Relies on tangibility and full body interaction.
  • Designing tangible interfaces requires not only designing the digital but also the physical.
  • Computing moving beyond the desktop and ‘intelligent’ devices spreading into all areas of life and work.
  • Applications previously not considered ‘interfaces’ are turning into such and computing is increasingly embedded in physical environments.
  • Tangible interaction, the body itself becoming an input ‘device’
  • 4 Themes:
    • Tangible Manipulation
    • Spatial Interaction
    • Embodied Facilitation
    • Expressive Representation
  • Interesting but not that significant to our project.

A Tangible Interface for Organizing Information Using a Grid (p339-jacob) (2/5)

  • A new tangible interface platform for manipulating discrete pieces of abstract information.
  • Tests the effectiveness of the new interface by comparing it to both graphical and paper interfaces.
  • The researchers developed a new platform and tangible user interface for manipulating, organizing, and grouping pieces of information, which they believe to be especially suited to tasks involving discrete data and collaborative group work. They called their new system Senseboard.
  • System focuses on manipulating a set of information items or nodes.
  • By providing a tangible user interface for this task, they aim at blending some of the benefits of manual interaction with those of computer augmentation to achieve:
    • a natural, free-form way to perform organizing and grouping;
    • rapid, fluid, two-handed manipulation, including the ability to grab and move a handful of items at once (in contrast to mouse interaction);
    • a platform that easily extends to collaboration (unlike the conventional mouse and keyboard interface).
  • Senseboard consists of a vertical panel 1.1 m wide x 0.8 m high, mounted like a portable whiteboard.
    • Uses small rectangular magnetic plastic tags (pucks), which are placed on the whiteboard.
    • Each puck contains an RFID tag which sends its location to the board.
    • They tested this system using a (Dell Pentium II PC) driving a video projector, projecting information onto the board and tags. Their software, written in Java and running on Windows 98, received input from the board via a serial port and sent its output to the projector.
  • Interesting but not significant to our project.

The reacTable: Exploring the Synergy between Live Music Performance and Tabletop Tangible Interfaces (p139-jorda) (4/5)

  • Researched the creation of live music with tabletop tangible interfaces.
  • Researches developed the reacTable, a musical instrument based on a tabletop interface.
  • Gives the advantages of having tangible devices to control and produce live music rather than using for example a software program on a laptop.
    • Having tangible devices to control and produce live music is very natural and similar to actually playing an instrument.
  • The reacTable, has been designed for installations and casual users as well as for professionals in concert.
  • In the reacTable several musicians can share the control of the instrument by caressing, rotating and moving physical artifacts on the luminous surface, constructing different audio topologies in a kind of tangible modular synthesizer or graspable flow-controlled programming language.
  • A simple set of rules automatically connects and disconnects these objects, according to their type and affinity and proximity with the other neighbors.

reacTIVision: A Computer-Vision Framework for Table-Based Tangible Interaction (p69-kaltenbrunner) (4/5)


  • Provides an introductory overview to first-time users of the reacTIVision framework – an open-source cross-platform computer-vision framework primarily designed for the construction of table-based tangible user interfaces.
  • The reacTIVision framework has been developed as the primary sensor component for the reacTable, a tangible electro-acoustic musical instrument. It uses specially designed visual markers (fiducial symbols) that can be attached to physical objects.
  • These fiducial marker symbols allow hundreds of unique marker identities to be distinguished as well as supporting the precise calculation of marker position and angle of rotation on a 2D plane.



  • The reacTIVision application acquires images from the camera, searches the video stream frame by frame for fiducial symbols and sends data about all identified symbols via a network socket to a listening application.
  • Uses a redundant messaging structure (OSC) over UDP transport. These messages constantly transmit the presence, position and angle of all found symbols along with further derived parameters. On the client side these redundant messages are then decoded to generic add, update and remove events corresponding to the physical actions that have been applied to each tangible object.


  • Would recommend reading this article to become more acquainted with how reacTIVision works and the specifications required for, for example, interfacing with it in Processing, building the table, and setting up the camera.
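The client-side decoding of those redundant messages into add/update/remove events can be sketched as set differences between the previous and current "alive" symbol IDs (an illustration of the idea, not reacTIVision's actual client code):

```java
import java.util.*;

public class TuioDiff {
    // Derive add/update/remove events from two consecutive "alive" ID sets
    static Map<String, Set<Integer>> events(Set<Integer> prev, Set<Integer> alive) {
        Set<Integer> added = new TreeSet<>(alive);
        added.removeAll(prev);               // newly placed objects
        Set<Integer> removed = new TreeSet<>(prev);
        removed.removeAll(alive);            // objects lifted off the table
        Set<Integer> updated = new TreeSet<>(alive);
        updated.retainAll(prev);             // objects still present (moved/rotated)
        Map<String, Set<Integer>> out = new LinkedHashMap<>();
        out.put("add", added);
        out.put("update", updated);
        out.put("remove", removed);
        return out;
    }
}
```

Because every message carries the full alive set, a lost UDP packet is harmless: the next message yields the same diff.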


Presentation

Congrats on the reading guys, some interesting stuff. We need to pull our socks up with the slides though: there are still 2 slides to fill in, including a conclusion which we should come up with together (I'd suggest a Skype convo later this evening), and slide 3.

Also, we may need some more input in slide 5 (Journal Slide). At the moment we have two points to talk about, however we will need two more relevant journals.

Reply to this post with conclusion suggestions, updates, and availability for tonight's Skype conversation.

Thanks

Saturday, March 26, 2011

Summary of my papers

p201-jacob: Not really related to what we are doing but would be an interesting read for Novel anyway. It deals with 3 case studies related to Reality-Based Interaction (RBI), Body & Environment awareness and Naïve Physics, and explains the RBI Themes and Tradeoffs for each case.

p781-apted: Not related to our project but is a pretty interesting read. It describes the design of SharePic – a multiuser, multi-touch, gestural, collaborative digital photograph sharing application for a tabletop, specifically designed for the elderly.

p369-jorda: This seems like it could have been a promising document, as it starts off with an abstract explaining that four approaches will be explored, one of them being “the Reactable: a musical tabletop, and its companion fiducial tracking system reacTIVision”. Unfortunately, this paper only describes a studio that was going to be held, and acts only as a program for the studio in question. It also provides a link to a website, http://tangible.media.mit.edu/projects.php, which is apparently “closed for repair”.

Virtual Farm: a 3D farm game for Kindergarten children. Although it is not exactly what we are working on, there are some similar elements, as well as some design considerations that we can adopt from it. Below are a few points:

  • Physical technologies are well suited to children, especially if they are designed to include aspects that are relevant to the child’s development: social experiences, expressive tools and control
  • Tangible and tabletop applications for children are not only for fun, but also have an educational aspect and, if designed optimally, they can help children in their motor-skill and cognitive development.
  • The design of this prototype has been based on the observation of children using the technology, letting them freely play with the application during three play sessions.
  • They used physical toys as tangible bits (by sticking the fiducials at their base). They also discuss more complex forms of augmented toys that have cameras and sensors hidden inside them
  • The design of the underlying platform focuses on robustness and simplicity in a hardware configuration that does not require high cost technology. The result is a low cost tabletop design, portable, easy to replicate and install, suitable for using in schools, or children’s homes.
  • Prototype achieved by implementing a tangible system through the adaptation of existing technology rather than developing innovative tangible technologies.
  • They also talk about a study that evaluated children between 3 and 4 years old and showed many difficulties with the interaction including frustration by the child when playing with the tabletop game when the system didn’t respond to their “little finger” interaction.
As regards to the hardware used:

  • The table surface is made of translucent material and a USB video camera is located under the table in order to read the toys the child places and manipulates on the table.
  • Unlike other tabletop configurations, this design does not show a computational image on the tabletop surface, but on a monitor disposed in front of the child.
  • There is no limit on the number of toys that can be placed and moved over the desktop (as long as there is free space on the table) This enables more than one child to play on the desktop at the same time, and opens the application space to social activities
Tangible Interaction: This is the same as ours, with the moving and rotating of fiducials. They also add a function to prevent the animals from disappearing from the screen when the child makes the animal “jump”!

Quote: María Montessori: “Children build their mental image of the world, through the action and motor responses; and, with physical handling, they become conscious of reality”.


Papers I'm Reading (UPDATE)

An update on the papers I'm reading (the previous post contained a document that has already been covered):

Virtual Farm
p781-apted
p369-jorda
p201-jacob

Since Andrew has also selected his papers, the remaining 4 docs shall be covered by Sean.

Maria.

Andrew - Journals

Hi Guys

The below are the papers that I will be reading:


p103 marco
p209-xu
p307-antle
p408

Papers I'm Reading

Just a quick note to let you know which papers I'm reading:

Virtual Farm (this is so interesting!!)
p781-apted
p139-jorda
p201-jacob

Will post back in due course with a summary for the relevant documents :)

Maria.

Wednesday, March 23, 2011

New Game Mode Suggestion

What I'm imagining is a 2-player 'Versus Mode'.


  • Each player is given an instrument sound one at a time, with a long time limit of, say, 30 seconds,
  • After player 1 gives his/her answer (display Correct or Wrong), add 1 or 0 points accordingly to his/her score,
  • Player 2 then gets his/her instrument sound and has to answer which instrument is playing (and update the score),
  • So on and so forth...
  • Whoever reaches a score of, say, 5-10 points wins the game


This will make the game more engaging and challenging whilst being educational!


Daryl - Journals

I chose to read the following 4 journals:
  1. p163-bakker (read but don't think it's much of interest regarding our project)
  2. p163-marshall (read & quoted)
  3. p191-xie (read & quoted)
  4. p2243-antle (read but don't think it's much of interest regarding our project)
An interesting journal was the one of Marshall (p-163).



'Learning Through Physical Interaction - (Marshall - p163)

In the journal, an analytic framework is presented comprising 6 perspectives that may guide research and development on the use of tangible interfaces for learning.

In my opinion the following are the most important domains we can use:
  • Possible Learning Benefits (Playful Learning, Collaboration...etc...)
  • Learning activity (Exploratory, Expressive)

Are Tangibles More Fun? Comparing Children's Enjoyment and Engagement Using Physical, Graphical and Tangible User Interfaces - (Xie - p191)


Enjoyment and Engagement

"Enjoyment and engagement are integral and prerequisite aspects of children’s playful learning experiences. They are the two primary dependent variables evaluated in this research study. The conceptual definitions of enjoyment and engagement set the scope and meaning of the terms within this research study. Each is a complex construct which may be derived from physical, social and cognitive theories."


Collaboration

"Another variable of interest related to enjoyment and engagement is collaboration. Children communicate and learn through social interaction and imitating one another. In this way they acquire new knowledge and hone their ability to collaborate with others. Inkpen et al. found that children exhibit a significantly higher level of engagement and activity when working alongside each other [12]. Sluis et al. suggest that a collaborative environment is more likely to elicit increased intrinsic motivation [30]. Working together in small groups is shown to increase children’s enjoyment, engagement and motivation [12,29]. Based on the assumption that a collaborative, co-located condition is ecologically valid and would enhance children’s enjoyment and engagement for all interface styles, a paired collaboration situation was chosen for our study design as detailed below."



This Journal basically talks about a jigsaw puzzle game on different types of User Interfaces (Traditional User Interface, Graphical User Interface, Tangible User Interface).


Below are the results:


Results of Children Preference:

"Children commented that the puzzle was challenging but that they liked it because they could finish it within the allocated length of time. Some children commented that they were concerned about how much time they had already spent and how much time they still left for solving the puzzle in the progress of play. This finding is in line with guidelines proposed by Salen and Zimmerman [27], which state that an enjoyable game balances challenge against possibility of winning. It is possible that two thirds of the pairs rated all puzzles as enjoyable because the puzzles contained right balance between challenge and achievability regardless of interface style. Children also commented that they liked getting help during play from either the reference pictures or their partner (collaboration). This result was consistent with our observations on their collaborations and use of the reference picture.


Some children indicated that they did not like it when the picture underlying the puzzle was turned off (perhaps by their partner). A few children mentioned that they disliked feeling pressured due to the time limitation. This comment was more frequent from the pairs in the GUI condition. Some children complained that there were too many pieces in GUI puzzles (which had fewer pieces than the TUI or PUI puzzles)".


Design Implications:

Based on the findings of this study we see several implications for design of tangibles for children.

  1. Collaboration style was related to input design. The multiple access points afforded by a tabletop game (tangible and traditional), combined with enough space to move, supported parallel independent play rather than sequential turn taking.
  2. There does seem to be a benefit to physical manipulation of objects on a tabletop space. The researchers observed evidence of moving the body to engage in perspective taking. Direct interaction with pieces was reported as easier and less frustrating for children than indirect interaction using a mouse or touchpad.
  3. The value of integrated representations depended on the cognitive strategies being used in problem solving. For a jigsaw puzzle, children preferred a visual strategy (picture matching) to a spatial one (shape matching), and so the display of the reference picture was important. It is unclear if there was a benefit to having the picture integrated with the input space.
  4. The gap between girls' and boys' comfort levels with computers was not automatically bridged by using tangibles based on familiar objects.


Tuesday, March 22, 2011

Presentation 1 - update

Hey guys, just a short update that I updated my part of the presentation on Google docs.

With this 30pt font thing you'll be surprised how quickly you run out of space to write. I had initially drafted 3 short (but fully structured) sentences which did not fit when I pasted them on the Google docs presentation.

I removed the sentence structure and went for straight to the point form phrases and we had a winner :)

Also, if Google Docs doesn't let you set a 30pt font size (it wasn't available when I tried, and you can't enter a size either), use 28pt... it's close enough. Plus, we can always change it when we export to PowerPoint.

Good day :)

Maria.
Published with Blogger-droid v1.6.7

Kevin: Journals

Hey guys, as promised, I've summarized the journals I've read into point form and placed them below. Note that the ratings next to the titles indicate usefulness to our project.

In my opinion, we should definitely mention the Advantages, Limitations, and Uses for StitchRV (found in Wang's Journal).

StitchRV is a software application which allows for multiple camera inputs, meaning we can increase the reactive table's range, make the fiducials smaller, or get the cameras closer to the fiducials (which is useful as it limits the bulkiness of the rig).

StitchRV is currently limited by the restricted number of cameras, which depends on the processing power of the workstation. (In tests using a MacBook Pro with a dual-core 2.4 GHz processor and 2 GB of memory, the system could only handle 3 cameras.)

Another (current) limitation is the fact that changing settings (such as number of cameras used) requires altering the source code (C++), however this shouldn't be an issue for seasoned programmers.

Also note how the second journal (Out of the Box) has been rated 1/5. This is not due to misleading content or a lack of interesting points; rather, the entire journal deals with a single case study of a reactive LEGO table in a toy store, which would display 3D images of toys once their box was placed on the sensor part of the table.
All in all a good read, but I personally did not find it applicable to our cause.



Anyway, summaries of the 4 journals I read may be found below; enjoy.


P6- Vandenhoven: Tangible Play (4/5)

· Journal deals with tangible games, ideal for our scenario.

· “They [people] like game play for a variety of reasons: as a pastime, as a personal challenge, to build skills, to interact with others, or simply for fun” Quote from text, may be useful as an impact line at bottom of slide.

· The journal also specifies that some gamers prefer board games over newer games due to the socialisation aspect. I do not fully agree with this, as certain modern games allow for socialising (via online play, two-player mode, or multiple inputs such as the Reactable).

· Vandenhoven in fact went on to mention the concept of sitting around a table to play digital games (via embedded touch displays).

· The ‘Tangible Play’ workshop “brings together researchers and practitioners”, and deals with topics such as marketability of tangible games.

· Tangible Play is a one day workshop, in which a “guest speaker from Philips Research ... will provide an industry perspective on tabletop game design.”

P61- Nielsen: Out of the Box (1/5)

Deals with ‘marketing’ children’s physical toys by bringing them to life at a toy store, via a 3D game engine and a reacTIVision game table.

· LEGO toys are designed using a 3D program. That same design tool is eventually used to create the physical moulds.

· To get point 1 to work, a reacTIVision table was constructed, with a camera located underneath the Perspex glass table, and a monitor mounted on a nearby wall.

Toy boxes all carried fiducials underneath which, when placed on the reacTIVision table, prompt the system to show a 3D model of the toy on the screen.

It is also good to point out that when they were constructing the table, they had their target audience (6-16 year old children) in mind, designing the table’s height accordingly.

· Similar reacTIVision tables include:

o “Tangible Programming exhibit at the Boston Museum of Science”

o “Battleboard 3D [which] is a mixed-reality chess-like board game”

o Storysurfer which is a “combined surface (floor: top-projected, table: back-projected) installation, which allows children to browse and select books in a library”

· The majority of this paper deals with how the authors constructed their toy store table, where they positioned it, and contains a study of who made use of it during the span of the study (gender and age group studies). As interesting as this study was, most of the data is inapplicable to our group’s scenario.

P63-Van Dam: Post-WIMP User Interfaces (4/5)

· Four styles of interface throughout time:

o 1950s-1960s: Batch processing (Punch Card Input, Line-Printer Output): No user interface due to lack of interaction with system.

o 1960s-1980s: Basic Command Line Interfaces, via “alphanumeric displays”

o 1970s: Launch of “Raster graphics-based networked workstations”, along with “point-and-click WIMP GUIs”

o 1990s onwards: Post-WIMP user Interfaces

· Van Dam considers post-WIMP to be interfaces which do not make use of “menus, forms, or toolbars, but rely on ... gesture and speech recognition”

· “the most important predictor of an application’s success was its ease of use (how “user-friendly” it is), both for novices and experienced users” Quote from text, may be useful as an impact line.

· Mentions concept of butler style interface, where workstation knows what user requires without direct user input (can be done via facial expression/hand gestures)

· Drawbacks of WIMP GUIs

o As application complexity increases, it becomes harder to learn the interface, due to an increased number of widgets and features.

o Users spend too much time “manipulating the interface” to get to the point where they wish to get to (due to point-and-click methodology)

o Mouse and keyboard input not available to all users (either because they don’t find controlling a mouse to be natural, or due to injury related issues/disability.)

o “WIMP Interfaces do not take advantage of speech, hearing and touch”

· The Future: “Hugely powerful ubiquitous computers in many different form factors”

o “Wearable Computers” (6th Sense)

o “Whiteboard sized or wall sized displays” (already in existence)

o “Lightweight, minimally intrusive head-mounted displays” (Virtual/Augmented Reality)

o Future display resolutions will be far better than the current 70-100 dpi, which Van Dam considers inadequate.

P287-Wang: StitchRV: Multi-Camera Fiducial Tracking (5/5)

· StitchRV: fiducial and touch-tracking engine based on reacTIVision.

· Combines video input from multiple cameras at the same time

· Single camera setups are limited to the field of view of their single camera. Adding more cameras to the rig increases the field size, and reduces the chance of obstructions.

· Hardware: Currently designed to work with a Playstation Eye which delivers video via USB at 60frames per second. (640x480 resolution)

· Advantages of StitchRV:

o Requires little equipment

o Enables object-tracking

o Multiple cameras allow for higher resolution fiducial and touch tracking

o Through point 3, allows fiducials to be smaller (allowing for a more densely packed surface)

o Multiple cameras makes it possible to have less distance between cameras and surface (as surface range not limited by a single camera’s line of sight)...

· Current Limitations:

o Only supports two camera feeds (at the moment)

o Source Code customization (openFrameworks / C++) may be intimidating to some researchers

o Currently not a viable solution for researchers with no programming experience

o Limited by processing power

· Future work includes User Friendly Interface (to replace requirement for source code editing)