Archive for the ‘SCIENCE!’ Category

Recently, MobileSyrup released their predictions for wearable technology (wearables). Their top two predictions are that wearable tech will come into focus with AI and that 2017 will be the year of hearables (smart earbuds). In April 2015, I sent this email to the author of Introducing Data Science: Hearing aids on the brink of a paradigm shift (an article from 2014):

I recently attended a startup event where a company (Sensaura) showcased a technology that can determine a person’s emotion using only their heartbeat. This seems like it would be a perfect tool for improving hearing aid satisfaction in real time. Specifically, if a hearing aid had an integrated heartbeat sensor (ex: Valencell), this signal could be sent to a smartphone via Bluetooth (or even over a cellular connection, if the latency is low enough) along with the current auditory conditions. From this information, it would be possible to use machine learning to find useful associations between emotional states and environmental sound properties. Using this information, it is possible to progressively modify the hearing aid parameters (a sort of self-adjusting loop) and reduce the negative emotional reactions of hearing aid use. A self-adjusting hearing aid.
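The loop the email describes can be sketched in a few lines. Everything here is hypothetical: `comfort_score` is a stand-in for the emotion signal inferred from heart rate, and the "ideal gain" model inside it is invented for the demo — a real system would have to learn the mapping between emotional reactions and sound conditions rather than assume one.

```python
import random

def comfort_score(gain, noise_level):
    """Stand-in for the heart-rate-derived emotion signal: comfort
    peaks when the hearing aid gain compensates the ambient noise.
    (Hypothetical model, purely for illustration.)"""
    ideal_gain = 0.5 * noise_level
    return -abs(gain - ideal_gain)

def self_adjusting_loop(noise_level, gain=0.0, steps=200, step_size=0.05):
    """Crude self-adjusting loop: nudge the gain parameter, keep the
    nudge only if the (simulated) emotional response improves."""
    best = comfort_score(gain, noise_level)
    for _ in range(steps):
        candidate = gain + random.uniform(-step_size, step_size)
        score = comfort_score(candidate, noise_level)
        if score > best:
            gain, best = candidate, score
    return gain

random.seed(0)
tuned = self_adjusting_loop(noise_level=2.0)
print(round(tuned, 2))  # converges near the ideal gain of 1.0
```

The point of the sketch is only the shape of the feedback loop: sense, score, nudge, repeat — everything the email proposes reduces to some version of this.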

It’s interesting to re-read the article and email from the perspective of the state of tech in 2017. This truly is an industry in transition, and the change is just starting. Two months after that email, Doppler Labs launched a successful Kickstarter campaign for a “smart” wireless earbud. Since then, Doppler Labs has raised a total of $50.1 million, and other companies have emerged. I’m putting the email out there in case it gives someone a cool idea for something in this space, or in case someone wants to talk about this exciting opportunity!


Read Full Post »

A neuroscientist’s solution for VR’s next frontier

Virtual reality (VR) is the next big thing. I honestly believe it will change everything, the way smartphones changed everything. What excites me most about VR is that it lets us consume digital content without it being trapped on a screen. This means that in the (very near) future, we will not need to look at our phones to check our email or sit in front of a TV to watch a movie. All our electronic devices, content, and information will just be around us in the most unobtrusive fashion, almost like magic.

What is VR from the point of view of the brain? Our reality is formed from vision, hearing, touch, and body-sense inputs. Beyond these individual senses, though, our reality is formed by how these inputs work together. VR, in effect, is the process of manipulating the “external” senses – vision, hearing, touch – the senses that tell us about our environment. Currently, VR can manipulate what the brain sees. Doing this is tricky because we not only need to totally control what goes into the eye, we also need to take into consideration the body-sense (proprioception) of where the person is looking. If you move your head in the real world, you see something different (you go from seeing a screen to seeing a wall). To create this illusion in VR, you need to precisely measure the movement of the head and change what is presented to the eyes in near-perfect synchrony. The first huge milestone of VR was synchronizing the visual input with the movement of the body. We can also control what we hear and synchronize that with what we see. That’s where the technology is right now. The second milestone will be the ability to touch objects that are seen, but aren’t actually there.
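That synchrony requirement can be shown with a toy version of the math: every frame, the renderer applies the inverse of the measured head rotation to the scene, so turning your head one way slides the world the other way. This is a yaw-only 2D sketch of my own; real headsets track full 3D orientation plus position, usually as quaternions.

```python
import math

def world_to_view(point, head_yaw):
    """Rotate a world-space point (x, z) into view space, given the
    head's yaw in radians (positive = turning left). The renderer
    applies the INVERSE of the head rotation to the scene."""
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    x, z = point
    return (c * x - s * z, s * x + c * z)

# An object straight ahead (x=0, z=1). Turn the head 90 degrees to
# the left, and that same object should now sit to the viewer's
# right (positive x), with nothing ahead of them anymore.
print(world_to_view((0.0, 1.0), math.pi / 2))  # ≈ (1.0, 0.0)
```

The illusion lives or dies on how fast and how accurately this update happens: any lag between the head movement and the re-rendered view is exactly the mismatch the brain notices.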

How do you touch something that isn’t there?

Gently touch the side of an object. How does your brain know the object you touched is really there? Like I said earlier, our perception is based on how our senses work together. Here’s what happens when you touch something:

Vision: You see a hand touch an object. This hand is attached to an arm that is attached to something that seems attached to you (but you don’t actually know it’s attached to “you” without looking in the mirror, do you?).

Hearing: You may have heard a sound come from the direction of the object.

Touch: You felt something touch your finger.

Body-sense: You had the sense that you controlled the movement of your finger in a certain direction.

Your brain automatically takes all of these entirely different pieces of information and combines them into a cohesive percept. Why does it do this? Because every time that sequence of events has ever happened, they were related. Every time you saw a hand that seemed attached to you touch something, what you saw, heard, and felt were all related. You’ve literally been teaching your brain this association for your whole life.

Now, with VR, we can change what the brain sees. On top of that, we can put our body-sense in VR by tracking the movement of our limbs and seeing that. In its current state, VR can effectively trick our vision, audition, and body-sense. Not only can we see and hear a completely virtual environment, we can see ourselves in it. Our brain sees the virtual environment, and it sees arms that move perfectly in sync with what we tell our body to do. The brain concludes that the body it sees must be its own, because that was the case every time this situation had ever happened before. Before you know it, you’re no longer in your living room, you’re in a virtual world. That sense of being in a virtual world is a phenomenon called “presence” and it’s being reported today by people experimenting with VR.

So what’s the next step? You see a virtual world and you see a virtual body. Your brain figures that’s your body and that what you see represents where you are, because that’s how it’s always been. Ever. From the brain’s point of view, this is the current limit of VR. You can see this world, and you can even believe you’re in it, but it all falls apart if you try to touch anything. You try to touch a box and your finger goes through it. Your brain is confused at first, but it knows this isn’t real. You can believe you’re somewhere else, as long as you don’t touch anything.

So how do you touch something that isn’t there?

WARNING: This part is 100% theoretical. To my knowledge, no one has tried this. This should work, but until it’s tested, it’s only an educated guess.

With this device:

[Animation: Touch v3.gif]


What this device (let’s call it TouchVR) does, very crudely, is simulate touch. Put simply, when the virtual finger touches the virtual object, TouchVR actually touches the finger. The brain then figures that the hand it saw touch something is probably related to the sensation it felt on the finger. This is all combined and forms an effective multisensory illusion (an illusion involving many senses). Plus, the device weighs hardly anything (a little under 3 grams), so it feels like your hands are free.

You may be asking yourself, why don’t you take this idea and sell it for millions?! I wish it were that easy. Unfortunately, I’ve reached my technical limits, and I still don’t know if this actually works, in the sense that the brain will think it actually touched something. Instead of keeping this idea to myself, I figure it has greater potential if I share it and work on it with other passionate individuals.

Assuming I’m right about this illusion, here are the next two steps to bring touch into VR:

  1. Build a hardware interface between TouchVR and the computer. I started to attempt this (with the help of the great people at FouLab, a local hackerspace), but realized I was way out of my depth when I found myself Googling the difference between PNP and NPN transistors because I had bought the former, not knowing anything about transistors.
  2. Program Unity to interact with TouchVR. With the help of a friend, I managed to get a light to turn on when a virtual box touches a virtual sphere, but that’s as far as I’ve gotten. What needs to happen next is to get that light to turn on when a hand, mapped into Unity using Leap Motion, touches a virtual object.
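At its core, step 2 is just a contact test that fires a signal. Here is a minimal sketch of that logic in Python rather than Unity’s C#. All the names are mine, and `pulse_device` is a stand-in for whatever would eventually drive the TouchVR hardware:

```python
def finger_touches_box(finger_pos, finger_radius, box_min, box_max):
    """Sphere-vs-axis-aligned-box test: the fingertip (a small
    sphere) touches the box when the distance from its centre to
    the closest point on the box is within the finger's radius."""
    dist_sq = 0.0
    for p, lo, hi in zip(finger_pos, box_min, box_max):
        closest = min(max(p, lo), hi)
        dist_sq += (p - closest) ** 2
    return dist_sq <= finger_radius ** 2

def on_frame(finger_pos, pulse_device):
    """Each tracked frame: if the virtual fingertip is in contact
    with the virtual box, fire the device (here just a callback;
    the real version would write to the TouchVR hardware)."""
    box_min, box_max = (0, 0, 0), (1, 1, 1)
    if finger_touches_box(finger_pos, 0.01, box_min, box_max):
        pulse_device()

# Hypothetical usage: a fingertip right at the box face triggers
# the pulse; one far away does not.
events = []
pulse = lambda: events.append("pulse")
on_frame((0.5, 0.5, 1.005), pulse)   # fingertip just at the top face
on_frame((0.5, 0.5, 2.0), pulse)     # well clear of the box
print(len(events))  # 1
```

In Unity the engine’s own collision triggers would replace the hand-rolled test, with the fingertip position supplied by Leap Motion tracking; the shape of the logic stays the same.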

There will undoubtedly be a million other steps, but those are the next two big ones to see if this works.

If TouchVR works the way I think it should, then when you grasp a virtual object and feel something on your finger, your brain should send a signal to your hand to stop moving your fingers. This is called a top-down process: a signal from the brain to the hand. Every time the brain has seen a hand that it controlled grasp something from a first-person perspective and felt something on the finger, something was actually there. So it should send a signal to the hand to stop moving the fingers, because why keep pressing when there’s something there? To my knowledge, no one has ever experienced this. No one has ever reached for something that wasn’t there and had their brain say “you don’t need to close your hand anymore, you’re holding a solid object” without there actually being a solid object. I don’t want to get too far into the ramifications if this works, but it would mean that we could trick our brain into thinking things that aren’t there are there. Things, real things you can see and feel, would no longer need to actually exist. Think about that for a second.

Again, why am I posting this on the internet where someone could come and steal this idea? I’m at my technical limit and my curiosity is getting the best of me. I want to see if this works, and if it does, I want to see this concept’s full potential! To do this, I need help from someone who has skills I don’t. Maybe you’re that someone or maybe you know that someone. I think VR is going to change everything and I want to be part of that change. If you feel the same way, maybe we can work together and shape the future!

Read Full Post »