Category Archives: philosophy

Discussion of the project philosophy and historical precedents.

Wired’s Programmable World

Wired magazine has an article on what they call the “Programmable World” (thanks for the pointer, Blase!).  I feel like that’s a pretty good description of the world I want to live in, so I read the article: http://www.wired.com/gadgetlab/2013/05/internet-of-things/all.

Here’s a short reaction to it.  I liked it.  Here’s a (slightly) longer reaction.

The article asserts that the programmable world will arrive in three stages:

1. More and more devices get onto the network.

2. These devices work together to automate tasks.

3. The devices are organized into a programmable software platform that can run apps.

I suppose I agree with this sequence in terms of the underlying technology, but—as is often the case—the evolution of the technology is less interesting to end users than the evolution of the end user experience.  Here’s my take on what the transformation could look like to regular folks:

1. More and more devices become available with their own iPhone/Android apps.  That’s a safe and easy prediction because it’s already well under way.  I just bought a Jawbone Up (an activity monitor), for example, and it comes with a free iPhone app for plotting daily activity and sleep cycles.

2. Cross-device interfaces like IFTTT or SmartThings (which I’ve written about in the past) will make it possible for end users to specify simple programs that involve multiple devices.  I’m not as sure about this prediction.  End-user programming may or may not catch on (although I deeply believe it should).  Regardless of how popular they become, these cross-device interfaces are already here.  In fact, I bought the Up because I have a rule that says “if a device is advertised as working with IFTTT, I must buy it.”  So, I have a little IFTTT program now that turns on colored lights (Philips Hue, which also just got integrated with IFTTT) in my apartment if I reach my per-day step goal.  It’s not at all useful and it doesn’t work the way I want it to.  But, it’s cool that it’s becoming possible to knit these gizmos together at all.
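The step-goal rule above is a classic trigger-action program.  Here’s a minimal Python sketch of what such a rule amounts to underneath; the field names and the Hue command are hypothetical stand-ins for illustration, not IFTTT’s actual API:

```python
def make_rule(trigger, action):
    """Bundle a trigger predicate and an action into one rule.

    The rule inspects a snapshot of device state; if the trigger
    holds, it returns the action's result, otherwise None.
    """
    def rule(state):
        if trigger(state):
            return action(state)
        return None
    return rule

# Trigger: daily steps reached the goal (field names are made up).
def steps_goal_met(state):
    return state.get("steps", 0) >= state.get("step_goal", 10000)

# Action: describe the command we'd send to the lights (hypothetical).
def turn_on_hue(state):
    return {"device": "hue", "command": "on", "color": "green"}

rule = make_rule(steps_goal_met, turn_on_hue)

print(rule({"steps": 12000, "step_goal": 10000}))  # rule fires
print(rule({"steps": 3000, "step_goal": 10000}))   # rule stays quiet
```

The appeal of this model is also its limit: each rule pairs exactly one sensor condition with exactly one action, which is the constraint stage 3 below is about escaping.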

3. We move from a sensor-action model to an event-activity model.  I’m not really seeing this stage in the real world yet, but the more I play with programmable devices, the more I think the whole enterprise flounders without it.  Here’s a concrete example.  The Wired article talks about automating our daily tasks.  They mention that a computational system should be able to observe the rules that you, the user, follow and then adopt them itself.  So, if you behave according to the rule “If the sun hits your computer screen, then you lower a shade,” then the computer could do it for you and save you a step.  That’s fine, but here’s the thing.  That’s not really the rule we follow.  It’s something more akin to “if I need to see the screen and I can’t, do something to make the screen visible.”  There is no sensor that can directly measure whether you can see the screen.  Such an event needs to be inferred from a set of observables: the brightness of the screen, the direction you are looking, how long it has been since the mouse was moved, the time of day, whether the shades have already been pulled, and so on.  And the action of pulling the shades isn’t always the right response.  It might be more effective to brighten the screen, rotate it a little, or perhaps dim the lights in the room.  We need a computing infrastructure that can (1) integrate information across devices, (2) infer the desirability of possible interventions, and (3) orchestrate a coordinated response across devices.
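The shade example can be sketched in code.  Here’s a toy Python version of the event-activity idea: the event (“screen unreadable”) is inferred from several weak observables rather than read off a single sensor, and the response is chosen from a menu of interventions rather than hard-wired.  All the sensor names, thresholds, and costs are made up for illustration:

```python
def screen_unreadable(obs):
    """Event inference: combine several weak signals into one estimate.

    No sensor directly measures "the user can't see the screen," so we
    approximate it: glare is likely, and the user is actually at the machine.
    """
    glare = obs["ambient_lux"] > 2000 and not obs["shade_down"]
    in_use = obs["seconds_since_input"] < 60
    return glare and in_use

def choose_intervention(obs):
    """Activity selection: several responses could fix the same event.

    Pick the least disruptive intervention that is still available,
    using hand-assigned disruption costs.
    """
    options = []
    if obs["screen_brightness"] < 100:
        options.append(("brighten_screen", 1))
    if not obs["shade_down"]:
        options.append(("lower_shade", 2))
    options.append(("dim_room_lights", 3))
    return min(options, key=lambda option: option[1])[0]

obs = {"ambient_lux": 5000, "shade_down": False,
       "seconds_since_input": 10, "screen_brightness": 60}
if screen_unreadable(obs):
    print(choose_intervention(obs))  # brighten_screen
```

The point of the sketch is the shape, not the numbers: the event sits between the sensors and the actions, so the same inferred event can be answered by whichever coordinated response the situation favors.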

From the end user’s perspective, this last step requires careful interaction with the devices.  They need to learn from you without distracting you.  The line between teaching, training, and programming begins to blur as we tell the computing systems what matters to us, how it can be measured, and what to do in response to undesirable situations.

Well, that’s the trajectory, anyway.  For now, the best bet is to focus on getting an infrastructure in place so we can start experimenting with different models of interaction.  We have lots of goals for this summer and I hope to be able to report on exciting and steady progress over the next few months!

Welcome to the Future

My name is Michael Littman.  I’m a computer scientist at Brown University and I’ve had the privilege to work with a team of creative and energetic people at Rutgers University on a project we call “Scratchable Devices”.  Welcome to our blog!

The aspiration of the “Scratchable Devices” team is to help move us to a future in which end-user programming is commonplace.  The short version of the pitch goes like this.  We are all surrounded by computers: more and more of the devices we interact with on a daily basis are general-purpose CPUs in disguise.  The marvelous thing about these machines is that they can carry out activities on our behalf: activities that we are too inaccurate or slow or fragile or inconsistent or, frankly, too important to do for ourselves.  Unfortunately, most of us don’t know how to speak to these machines.  And even those of us who do are usually barred from doing so by device interfaces that are intended to be friendly but in fact tie our hands.

We seem to be on the verge of an explosion of new opportunities.  There are new software systems being created, more ways to teach people about programming, and many, many more new devices that we wish we could talk to in a systematic way.  The purpose of this blog is to raise awareness of developments, both new and old, that bear on the question of end-user programming.  I welcome email from interested readers who have spotted something cool (mlittman@cs.brown.edu), although I can’t claim to offer anything more or less than my own personal take on what I read.

In the days to come, I will sift through my backlog of interesting tidbits and then maybe we’ll be in a good position to start figuring out what the future could look like.