1. PhysComp & ICM Finals: Assistive AR for the visually impaired.

For our Physical Computing final, my project partner Sajan and I decided to work on an assistive project for the visually impaired. Although we had many ideas, we realised that we really had no idea of the particular challenges and needs a blind person faces on a daily basis. So we set up a meeting with Walei, a young man who had gone blind much later in life. Meeting Walei put paid to a lot of our preconceptions and ideas. When we asked him to talk to us about his day-to-day experiences and his opinions on what could be improved, his answers gave us wonderful insight into the unique ways in which he compensated for his lack of sight, including learning his way around New York by memorising it.

After talking to Walei and to Steven Landau, whose company creates aids for the blind, we decided to work on a device that would allow a visually impaired person to recognise the people in front of them or addressing them.

The idea was to use AR markers as name tags: a mobile phone camera would recognise the AR tag on the person in front of the user and give back information about that person via audio, using the phone itself. The tag could contain information such as the person’s name, profession and any other details that could help the visually impaired person understand who is in front of them.

My partner was also interested in working on a motorised Braille device that would move to represent the person’s name in Braille as soon as the AR marker was read. He set out to work on this.

Having never worked with Android before, I asked for help and used resources on the internet to figure out how to program the phone to play back audio files and to do image tracking. In time, and with plenty of help from Craig, I was able to program the phone to play back audio files. We also found an OpenGL-based library for Android that allowed us to do marker tracking: the phone’s camera recognised the marker and a 3D rectangle popped up.

From here, we thought it would be simple enough to change the code so that the audio files play instead of the shape appearing when the marker is recognised. However, it is not quite that simple. I can get the recorded MP3 files onto the phone and play them separately, and I can have the camera recognise the marker and give back an image, but I ran into trouble merging the two. To solve this I need to get deeper into Android programming, which is where the problem lies. That is Step 2. Once that is done, the phone will, technically at least, be able to read different AR markers and give audio feedback about the person wearing each tag.
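Conceptually, the merge we’re after is a small lookup from marker ID to audio clip, invoked from the tracker’s recognition callback instead of the code that draws the 3D shape. Here is a minimal sketch in plain Java — the class name, marker IDs and file paths are all hypothetical, not from our actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: map each AR marker ID to the MP3 describing the
// person wearing that tag. On the phone, clipFor() would be called from
// the tracking library's marker-recognised callback, and the returned
// path handed to the media player instead of drawing the 3D rectangle.
public class MarkerAudioMap {
    private final Map<Integer, String> clips = new HashMap<>();

    // Register a name tag: this marker ID plays this audio file.
    public void register(int markerId, String mp3Path) {
        clips.put(markerId, mp3Path);
    }

    // Returns the audio clip for a recognised marker, or null if the
    // marker is not one of our name tags.
    public String clipFor(int markerId) {
        return clips.get(markerId);
    }
}
```

The point of keeping the lookup separate from the tracking code is that the same table can be reused both in the Processing prototype and, later, in the Android version.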

I am continuing to work on the Android app to get it to the point where the phone can read the markers and play back the relevant MP3 files. The concept works in Processing, where recognition of a marker plays back the audio, but without mobility this application would be futile, so the most important step is to get it working on Android.

This project has been the hardest and most frustrating one for both Sajan and me, but it’s also been the one we’ve gotten the most out of. Sajan struggled with the mechanics of making the motorised Braille work (more about this on his blog - http://itp.nyu.edu/~sr1971/myBlog/?cat=13). In the process, he learned a lot about different materials and their possibilities and limitations. I struggled with my lack of knowledge of Android, but it also gave me the opportunity to start learning Android programming and to get to know the most widely used mobile platform in the world. A lot of it seemed familiar because Android and Processing have Java in common. However, there are features peculiar to Android programming that I’ll be able to understand only by learning it properly. It was incredibly exciting, though, to program something, upload it to the phone and then see it running there as a feature! I can safely say I’m addicted :)

  2. Physical computing midterm project ‘Fake Shake’, finally working wirelessly over Bluetooth


  3. Phys.Comp: Week 7, Serial Communication w/ multiple sensors

This lab was very interesting and instructive, especially because it forms the basis of our media controller project. Our project is based on a triple-axis accelerometer: the three axes have to be sent to Processing as three separate values, with the switch as the fourth value. The following are the four values and the graph obtained from them:
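One way to handle the four values on the Processing side is to send them from the Arduino as one comma-separated line and split the line on arrival. This is a minimal parser sketch in plain Java — the packet format "x,y,z,switch" matches our four values, but the class and method names are my own, not the lab code:

```java
// Hypothetical parser for one line of serial data of the form "x,y,z,s":
// the three accelerometer axes followed by the switch state, sent from
// the Arduino as comma-separated values terminated by a newline.
public class SensorPacket {
    public final int x, y, z, sw;

    public SensorPacket(int x, int y, int z, int sw) {
        this.x = x;
        this.y = y;
        this.z = z;
        this.sw = sw;
    }

    // Splits one serial line into its four integer fields.
    public static SensorPacket parse(String line) {
        String[] parts = line.trim().split(",");
        if (parts.length != 4) {
            throw new IllegalArgumentException("expected 4 values, got: " + line);
        }
        return new SensorPacket(
                Integer.parseInt(parts[0].trim()),
                Integer.parseInt(parts[1].trim()),
                Integer.parseInt(parts[2].trim()),
                Integer.parseInt(parts[3].trim()));
    }
}
```

In a Processing sketch the same logic would sit inside serialEvent(), with each parsed packet feeding one new point per axis to the graph.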

     
  4. Physical Computing assignment-Week 4 : Stupid Pet Trick


  5. Phys.Comp: Fantasy device

My fantasy device would consist of a social mobile AR app which would allow information that people choose to share about themselves to be overlaid on their physical selves, and which could then be “seen” through mobile phones. While this may seem overly intrusive, in reality it is no different from sharing one’s information on Facebook or Twitter. The information could be kept private, shared with friends or a selected group of people, or open to all. Another possibility would be for the device to recognise that another person nearby shares a similar interest, destination or taste in music, and to communicate this to us as a ping. Through this, a person could decide whether they would like to communicate with the other person.

My interest in this stems from the fact that today, as people gather together all the time, sharing common experiences like taking the metro, waiting in line at the supermarket or visiting the same cafe every morning, there is no real opportunity for connection. How does one start a conversation with a stranger? Many times, we just need an opener. If I knew that the person in front of me was planning to attend a concert by a band I like, or if his status told me that he’d been up all night with a crying baby, would this change the scope and nature of casual acquaintance? I read somewhere that most of our breaks are given to us by our weak connections. A social mobile AR app that allows people to make connections they wouldn’t otherwise perceive could increase the number of weak connections we make, and sometimes convert them into deeper relationships. I also see the potential for breaking down cultural and other stereotypes, as outer form takes a backseat to people’s narratives about themselves.

Another aspect is emergency information, like knowing someone’s blood type or any allergies they may have.

Facebook and Twitter have demonstrated that people are able and willing to share deeply personal information and thoughts. While there are aspects of this fantasy device that lend themselves to troubling scenarios, with the right safeguards it could facilitate connections that otherwise may not be made, and could potentially be an important source of information in emergencies.