Category Archives: upod

Descriptions of our own efforts to create user-programming of devices. Or is it universal programming of devices…?

Wired’s Programmable World

Wired magazine has an article on what they call the “Programmable World” (thanks for the pointer, Blase!).  I feel like that’s a pretty good description of the world I want to live in, so I read the article.

Here’s a short reaction to it.  I liked it.  Here’s a (slightly) longer reaction.

The article asserts that the programmable world will arrive in three stages:

1. More and more devices get onto the network.

2. These devices work together to automate tasks.

3. The devices are organized into a programmable software platform that can run apps.

I suppose I agree with this sequence in terms of the underlying technology, but—as is often the case—the evolution of the technology is less interesting to end users than the evolution of the end user experience.  Here’s my take on what the transformation could look like to regular folks:

1. More and more devices become available with their own iPhone/Android apps.  That’s a safe and easy prediction because it’s already well under way.  I just bought a Jawbone Up (an activity monitor), for example, and it comes with a free iPhone app for plotting daily activity and sleep cycles.

2. Cross-device interfaces like IFTTT or SmartThings (which I’ve written about in the past) will make it possible for end users to specify simple programs that involve multiple devices.  I’m not as sure about this prediction.  End-user programming may or may not catch on (although I deeply believe it should).  Regardless of how popular they become, these cross-device interfaces are already here.  In fact, I bought the Up because I have a rule that says “if a device is advertised as working with IFTTT, I must buy it.”  So, I have a little IFTTT program now that turns on colored lights (Philips Hue, which also just got integrated with IFTTT) in my apartment if I reach my per-day step goal.  It’s not at all useful and it doesn’t work the way I want it to.  But, it’s cool that it’s becoming possible to knit these gizmos together at all.
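For readers who haven’t played with these services, a rule like my step-goal one boils down to a single trigger-action pair.  Here’s a minimal sketch of the idea in Python (the step goal, the device interface, and the function names are all invented for illustration; this is not IFTTT’s actual API):

```python
# Hypothetical sketch of a trigger-action rule:
# "if I reach my per-day step goal, turn on the colored lights."

STEP_GOAL = 10000  # invented daily target

def evaluate_rule(steps_today, lights):
    """Fire the action whenever the trigger condition is met."""
    if steps_today >= STEP_GOAL:
        lights["on"] = True  # the whole "action" is one state change
    return lights
```

The point is how little expressive power such a rule needs: one sensor reading, one threshold, one action.  That simplicity is both why these interfaces are easy to use and why they hit a ceiling so quickly.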

3. We move from a sensor-action model to an event-activity model.  I’m not really seeing this stage in the real world yet, but the more I play with programmable devices, the more I think the whole enterprise flounders without it.  Here’s a concrete example.  The Wired article talks about automating our daily tasks.  They mention that a computational system should be able to observe the rules that you, the user, follow and then adopt them itself.  So, if you behave according to the rule “If the sun hits your computer screen, then you lower a shade,” then the computer could do it for you and save you a step.  That’s fine, but here’s the thing.  That’s not really the rule we follow.  It’s something more akin to “if I need to see the screen and I can’t, do something to make the screen visible.”  There is no sensor that can directly measure whether you can see the screen.  Such an event needs to be inferred from a set of observables like the brightness of the screen, the direction you are looking, how long it has been since the mouse was moved on the computer, the time of day, whether the shades have already been pulled, etc.  And the action of pulling the shades isn’t always the right response.  It might be more effective to brighten the screen, rotate it a little, or perhaps dim the lights in the room.  We need a computing infrastructure that (1) can integrate information across devices, (2) infer the desirability of possible interventions, and (3) orchestrate a coordinated response across devices.
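To make the shade example concrete, here’s a hedged sketch of what that third stage might look like in code: an event (“the screen is unreadable”) inferred from several observables, followed by a choice among interventions rather than one hard-wired response.  Every observable name and threshold here is invented:

```python
def screen_unreadable(obs):
    """Infer the event from observables: the user is active, facing the
    screen, and glare is high.  Thresholds are made up for illustration."""
    return (obs["glare"] > 0.7
            and obs["seconds_since_mouse"] < 30
            and obs["facing_screen"])

def choose_intervention(obs):
    """Pick the least disruptive action likely to fix the inferred problem."""
    if not screen_unreadable(obs):
        return None
    if obs["screen_brightness"] < 1.0:
        return "brighten_screen"   # cheapest fix: turn the screen up
    if not obs["shades_down"]:
        return "lower_shades"      # next: block the glare at its source
    return "dim_room_lights"       # last resort: change the whole room
```

Notice that neither function maps one sensor to one action: the event is a fusion of several readings, and the response is a ranked choice.  That’s the gap between stage two and stage three.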

From the end user’s perspective, this last step requires careful interaction with the devices.  They need to learn from you without distracting you.  The line between teaching, training, and programming begins to blur as we tell the computing systems what matters to us, how it can be measured, and what to do in response to undesirable situations.

Well, that’s the trajectory, anyway.  For now, the best bet is to focus on getting an infrastructure in place so we can start experimenting with different models of interaction.  We have lots of goals for this summer and I hope to be able to report on exciting and steady progress over the next few months!

Some things About SmartThings

It’s been a busy semester and I haven’t been able to find time to write.  And the longer I wait, the more I feel like the next post should be really good to justify the gap.  But, now it’s Spring Break and maybe I should just write something bad to take the pressure off.  🙂  Here goes.

Yesterday, I was fortunate to have a visit from the former CTO of iRobot, the makers of the Roomba vacuum cleaning robot and perhaps history’s only profitable robotics company.  Tom Wagner stopped by mainly to talk about the Human-Robot Interaction initiative my colleagues at Brown and I are trying to sell.  But, since he was there, I took the opportunity to tell him a bit about the end-user programmable devices project.  Blase Ur participated from Pittsburgh via a robotic telepresence device called a Vgo.


We demoed several programmable devices—a fan, a blender, and two strings of colored lights—and asked Tom for feedback.  He was amazingly insightful and refreshingly forthright on a number of issues.  Unfortunately, many of the juiciest tidbits he shared were prefaced with “Don’t blog this, but…”.  Oh, well.  But, one excellent comment he made that got us thinking was “How does your project relate to SmartThings?”

Mmmm.  Well, I didn’t recognize the name at first, but SmartThings is a project that Vukosi Marivate brought to my attention as a great example of work relevant to end-user programming that was seeking funding through KickStarter.  I probably should have followed up right away when he first told me, but I always feel very much teased by KickStarter campaigns.  The pleas for funds are very compelling, but so many of the amazing ideas never materialize that it’s often a big letdown.

Briefly, SmartThings is a venture to create a networking infrastructure for programmable physical devices.  In the words of their homepage, “SmartThings adds intelligence to everyday things in your world, so that your life can be more awesome.”  Huh.  Awesome is good.  They’ve got over a million dollars in funding through KickStarter and three million in seed funding.  They’ve got over 30 people working on programmable devices.

Their vision is to have a platform so that developers can create apps that interface with physical devices.  They’ve got a cool hub that all the various devices in your home can talk to.


The hub acts as a relay station for messages between the devices and the SmartThings cloud server.  Then, you can create apps that communicate with the cloud to monitor and control the devices.  It’s even possible to reconfigure and automate the behavior of the devices.  In addition to their own custom devices, they are hard at work making other wifi devices compatible with their system.  All the devices I’ve bought for my apartment in the last few months appear in their preliminary list of compatible devices, which is very encouraging!
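The relay arrangement described above can be illustrated with a toy in-memory version.  This is a sketch of the architecture only, not SmartThings’ actual protocol; the class and method names are invented:

```python
# Toy sketch of the hub-as-relay idea: devices report state to a hub,
# the hub forwards it to a stand-in "cloud" log, and apps send commands
# back down to devices through the same hub.

class Hub:
    def __init__(self):
        self.cloud_log = []   # stand-in for the cloud server
        self.devices = {}     # device_id -> latest known state

    def report(self, device_id, state):
        """A device pushes its state; the hub relays it to the cloud."""
        self.devices[device_id] = state
        self.cloud_log.append((device_id, state))

    def command(self, device_id, **updates):
        """An app (via the cloud) sends a command back to a device."""
        self.devices.setdefault(device_id, {}).update(updates)
        return self.devices[device_id]
```

The nice property of this shape is that apps never talk to devices directly: everything funnels through the hub and cloud, which is what makes monitoring, logging, and reconfiguration possible in one place.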

I think SmartThings is an exciting development, but it’s hard not to feel a little competitive with them.  They did a demo of their system in January 2013 that featured the creation of an app that monitors a switch to turn a light on and off.  The audience loved it.  But, our upod project demoed precisely that same functionality (and more!) in November 2011.  We had a cloud server, a switch, lights, and a programming interface.  We didn’t have dozens of people and millions of dollars.  We had a handful of undergraduates and a few thousand dollars for parts.

For me, the frustrating part is that nothing like SmartThings existed back in 2010 when we started.  If it had, we wouldn’t have put the energy into creating our infrastructure and would have used theirs.  But, we couldn’t use theirs because it didn’t exist.

Based on their demo, it appears that they have two programming interfaces for their system.  One is very close to the When-do interface we built for upod (which is, itself, inspired by the if-this-then-that design).  In our experience, this interface style is extremely easy to use, but also quite limited in its ability to express anything beyond the most basic programming functionality.  Their second programming interface is essentially raw code—lots of keywords and curly braces and colons.  It provides much more complete functionality, but at the expense of being impenetrable to non-computer-scientists.
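One way to see the contrast between the two interface styles: a When-do rule is simple enough to store as plain data and render in a friendly UI, while the raw-code interface exposes a full language.  Here’s a sketch of the data-style rule and a tiny interpreter for it (the field names and device IDs are invented):

```python
# A When-do rule represented as data rather than code: easy to display,
# edit, and validate in a graphical interface.

when_do_rule = {
    "when": {"device": "switch-1", "attribute": "on", "equals": True},
    "do":   {"device": "light-1", "set": {"on": True}},
}

def run_rule(rule, states):
    """Interpret a data-style rule against a dict of device states."""
    w = rule["when"]
    if states.get(w["device"], {}).get(w["attribute"]) == w["equals"]:
        states.setdefault(rule["do"]["device"], {}).update(rule["do"]["set"])
    return states
```

Anything expressible in this form can be shown to the user as a sentence (“When switch-1 turns on, turn light-1 on”), which is exactly its appeal and exactly its limitation: loops, state, and conditionals beyond one comparison don’t fit.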

In the upod project, we experimented with a kind of middle ground, which is a fully functioning programming language that is friendly and easy to use.  Specifically, we extended Scratch with blocks for listening to and controlling devices.

Looking ahead, we think there’s a lot more that needs to happen to make device programming friendly for the end user.  In particular, real-world devices get their information from real-world sensors.  Whereas if-this-then-that has recipes that can be specified with absolute precision like “When a new book is added to Kindle Top 100 Free eBooks, send me an email”, real-world sensors are more vague and messy.  Consider a recipe that appears in a screen shot in SmartThings’ advertising video: “when it’s going to rain, check if Barkley is in the yard”.


It’s a perfectly reasonable recipe and exactly what you’d want to be able to say as an end user.  But, questions like whether it’s going to rain or whether a dog is in the yard hide a tremendous amount of complexity about how information is gathered from the physical world.  I think it’s terribly unlikely that SmartThings has this functionality working in a way an end user can leverage yet.  But, it’s a lovely and important research question.  And, it’s what I’m eager to work on, assuming SmartThings really has solved the infrastructure problem.  I can’t wait until summer!
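To give a sense of what’s hiding under a vague trigger like “when it’s going to rain”: one plausible approach (entirely invented here, not anything SmartThings has announced) is to combine noisy evidence sources into a confidence score and fire only above a threshold:

```python
# Sketch of a fuzzy trigger: no single sensor answers "is it going to
# rain?", so combine unreliable signals with made-up weights.

def rain_likely(evidence, threshold=0.6):
    """Weighted vote over noisy signals; each evidence value is in [0, 1]."""
    weights = {"forecast": 0.5, "humidity": 0.2, "pressure_drop": 0.3}
    score = sum(weights[k] * evidence.get(k, 0.0) for k in weights)
    return score >= threshold
```

Contrast this with the Kindle recipe above, which is exact: the trigger either happened or it didn’t.  Fuzzy triggers raise questions a precise recipe never faces: who picks the weights, how do you explain a misfire to the user, and how does the user correct it?  Those are exactly the interaction questions I want to get at.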