1. PhysComp & ICM Finals: Assistive AR for the visually impaired.

    For our Physical Computing final, my project partner Sajan and I decided to work on an assistive project for the visually impaired. Although we had many ideas, we realised we had no real sense of the particular challenges and needs a blind person faces on a daily basis. So we set up a meeting with Walei, a young man who had gone blind much later in life. Meeting Walei put paid to a lot of our preconceptions and ideas. When we asked him about his day-to-day experiences and what he felt could be improved, his answers gave us wonderful insight into the unique ways in which he compensates for his lack of sight, including learning his way around New York by memorising it.

    After talking to Walei and to Steven Landau, whose company creates aids for the blind, we decided to work on a device that would allow a visually impaired person to recognise the people in front of them or addressing them.

    The idea was to use AR markers as name tags: a mobile phone camera would recognise the AR tag on the person in front of the user and return information about that person via audio, using the phone itself. The tag could encode information such as the person's name, profession and anything else that would help the visually impaired person understand who is in front of them.

    My partner was also interested in working on a motorised Braille device that would move to spell out the person's name in Braille as soon as the AR marker was read. He set out to work on this.

    Having never worked with Android before, I asked for help and used resources on the internet to figure out how to program the phone to play back audio files and do image tracking. In time, and with plenty of help from Craig, I was able to program the phone to play back audio files. We also found an OpenGL-based library for Android that allowed us to do marker tracking: the phone's camera recognised the marker and a 3D rectangle popped up.
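
    In case it helps anyone retracing these steps, the playback piece came down to very little code. It looked something like this minimal sketch, assuming the clip is bundled in res/raw (the activity and file names here are placeholders, not our actual ones):

        import android.app.Activity;
        import android.media.MediaPlayer;
        import android.os.Bundle;

        // Minimal audio playback test: load an MP3 bundled in res/raw
        // and play it when the activity starts. "walei_intro" is a
        // placeholder name for one person's recorded details.
        public class AudioTestActivity extends Activity {
            private MediaPlayer player;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                player = MediaPlayer.create(this, R.raw.walei_intro);
                player.start();
            }

            @Override
            protected void onDestroy() {
                super.onDestroy();
                if (player != null) {
                    player.release(); // free the decoder along with the activity
                    player = null;
                }
            }
        }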

    From here, we thought it would be simple enough to change the code so that the audio files play instead of a shape appearing when the marker is recognised. However, it seems it is not quite that simple. While I can get the recorded MP3 files onto the phone and play them separately, and also have the camera recognise the marker and give back an image, I ran into trouble merging the two. Solving this means getting deeper into Android programming, which is where the problem lies. That is Step 2. Once that is done, technically at least, the phone will be able to read different AR markers and give audio feedback about the person wearing each specific tag.
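
    Conceptually, the merge is just a lookup from marker ID to audio clip; the hard part is hooking it into the tracking library's update loop. Something like the hypothetical glue below is what we are aiming for, where onMarkerDetected stands in for wherever the library actually reports a match:

        import android.media.MediaPlayer;
        import android.util.SparseArray;

        // Hypothetical glue between marker tracking and playback:
        // map each AR marker ID to the MediaPlayer holding that
        // person's audio tag. The real tracking callback will differ.
        public class MarkerAudioMapper {
            private final SparseArray<MediaPlayer> playersById = new SparseArray<MediaPlayer>();
            private int lastId = -1;

            public void register(int markerId, MediaPlayer player) {
                playersById.put(markerId, player);
            }

            // Call this from the tracker whenever it recognises a marker.
            public void onMarkerDetected(int markerId) {
                if (markerId == lastId) return; // ignore repeat frames
                lastId = markerId;
                MediaPlayer player = playersById.get(markerId);
                if (player != null && !player.isPlaying()) {
                    player.start(); // speak this person's name/details
                }
            }
        }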

    I am continuing to work on the Android side to get the phone to read the markers and play back the relevant MP3 files. The concept already works in Processing, where recognition of a marker plays back the audio, but without mobility the application would be of little use, so the most important step is to get it working on Android.
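
    For reference, the Processing proof of concept is roughly the sketch below, using the video library for the camera and Minim for MP3 playback. markerVisible() is a stand-in for whichever detection call the marker-tracking library provides, and the file name is a placeholder:

        import processing.video.*;
        import ddf.minim.*;

        Capture cam;
        Minim minim;
        AudioPlayer nameClip;
        boolean wasVisible = false;

        void setup() {
          size(640, 480);
          cam = new Capture(this, width, height);
          cam.start();
          minim = new Minim(this);
          nameClip = minim.loadFile("walei.mp3"); // placeholder clip
        }

        void draw() {
          if (cam.available()) {
            cam.read();
          }
          image(cam, 0, 0);
          boolean visible = markerVisible(cam); // stand-in call
          if (visible && !wasVisible) {
            nameClip.rewind();
            nameClip.play(); // announce the person once per sighting
          }
          wasVisible = visible;
        }

        // Replace the body with the tracking library's detection call.
        boolean markerVisible(Capture frame) {
          return false;
        }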

    This project has been the hardest and most frustrating one for both Sajan and me, but it's also been the one we've gotten the most out of. Sajan struggled with the mechanics of making the motorised Braille work (more about this on his blog - http://itp.nyu.edu/~sr1971/myBlog/?cat=13). In the process, he learned a lot about different materials and their possibilities and limitations. I struggled with my lack of knowledge of Android, but it also gave me the opportunity to start learning Android programming and get to know the most widely used mobile platform in the world. A lot of it seemed familiar because both Android and Processing have Java in common. However, there are features peculiar to Android programming that I'll be able to understand only by learning it properly. It was incredibly exciting, though, to program something, upload it to the phone and then see it running there as a feature! I can safely say I'm addicted :)

     
  2. Physical Computing assignment - Week 4: Stupid Pet Trick

  3. Phys.Comp: Stupid Pet Trick - GSR Sensors

    For the stupid pet trick project, my lab partner Sajan and I decided to work with biofeedback sensors that would let us measure changes in a person's state of mind. We agreed on GSR sensors, which measure changes in the skin's electrical resistance. We set about making the GSR sensors ourselves as we thought it would be more fun, and it was! Following Mustafa's example, we first tried to make them from copper cents, but found that we could not solder the wire to them well enough to make it stick. So we moved on to aluminium strips, which Sajan stripped out of a junk shop monitor.

    We found these easier to solder. Once we had the wires soldered to the aluminium strips, we hot-glued the joints for good measure.

    After we connected the GSR to the Arduino, we ran the serial monitor to see the range of the values. We found that we'd get good values for a while, and then within a few hours they would be all over the place. We were told that the fault lay with the sensors: as the GSRs were crudely fashioned out of aluminium that had a shiny coat of something to begin with, the readings were not that stable. We scraped at the foil with sandpaper to try to make it more sensitive. On Tom's suggestion, we also made new sensors that were smaller and thinner, and found that these gave us better values. We tested the sensor on a few people and found that the readings changed with the introduction of different topics. Since the whole premise was based on interpreting these readings to reflect mood changes, we talked to the subjects about different topics and found that a non-excitatory conversation gave us readings in the range of 1-2, while excitatory ones gave us values around 10.

    Sajan and I decided to connect to Processing in order to represent these values visually. Through this exercise, we learned to use serial communication to connect the Arduino to Processing. In Processing, we set up the code to represent two value ranges through pictures; we called one "calm" and the other "angry". Once we had Processing set up, we tested the device on me. We found that I was able to affect the readings through changes in physiology and breathing. For example, when I wanted to simulate anger, I worked on recreating that emotion in my body: my body would go tense, my shoulders would stiffen, and my breathing would turn shallow. This would take the readings up into the 10-11 range, which was displayed as one image in Processing. Conversely, when I worked at a "peaceful" state of mind, my breathing would deepen and my body would relax. The values then dropped to around 1, although the transition to this value was slower than the other way round, and this range was shown as a different image in Processing. Unfortunately, this was the part I did not document, as we were just testing it. Lesson learnt: document at every stage, even if you think it's a harmless test that won't work. Chances are, it's the only time it will work.
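
    For anyone rebuilding this, the Processing side amounted to reading the Arduino's values over serial (the Arduino just prints one analogRead value per line) and switching images at a threshold. A sketch along these lines captures it; the port index, cutoff and image file names are specific to our setup:

        import processing.serial.*;

        Serial port;
        PImage calmImg, angryImg;
        int reading = 0;

        void setup() {
          size(640, 480);
          // Serial.list()[0] happened to be our Arduino; yours may differ.
          port = new Serial(this, Serial.list()[0], 9600);
          port.bufferUntil('\n'); // the Arduino prints one value per line
          calmImg = loadImage("calm.jpg"); // placeholder file names
          angryImg = loadImage("angry.jpg");
        }

        void serialEvent(Serial p) {
          String line = p.readStringUntil('\n');
          if (line != null) {
            reading = int(trim(line)); // int() returns 0 on a bad line
          }
        }

        void draw() {
          // Calm readings sat around 1-2, excited ones around 10,
          // so anything past ~5 we'd show as "angry".
          if (reading >= 5) {
            image(angryImg, 0, 0, width, height);
          } else {
            image(calmImg, 0, 0, width, height);
          }
        }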

    While it did work the next day in class, I did not document that either. However, on this day it was a little less stable. We then figured that the GSR was acting up again, and that it worked best for up to a couple of hours after it was created; after that, its performance deteriorated.

    The process of making and working with a GSR was fascinating in itself, and I'm now interested in carrying on with this project by introducing a heart rate monitor and, later, a GPS sensor. Heart rate monitors are supposed to give better readings, and it would be interesting to see whether one in conjunction with the GSR can paint any sort of accurate picture of a person's mood. I'm also looking at ways in which I can introduce context awareness, so that the emotion can be linked to a particular context/location. The latter could possibly be achieved through GPS, which can then be mapped onto Google Maps or some such app, but the former is a little trickier. A few options are cameras that record throughout, or a manual input by the person recording their state of mind at given intervals.

    This was an incredibly fun project, and through the process we ended up learning about the ease (and difficulty) of soldering onto different materials, creating sensors from scratch, serial communication, and more.