active art, arduino, art, human computer interaction, programming, Throw, web development

Active Art Project, “Throw”

I’ve recently started on an art project that’s going to combine four of my absolute favorite activities: creating digital art, web development, construction, and throwing balls at things. I’m so excited about this idea!

This project has been in the planning/research stages for a while, but I’m still very early on, and I want to write about the project and the process as it progresses here.

What is it?

There are several ways I could answer that; let me start with how one will use it.

You’ll start off holding a tennis ball, standing 10 – 60 feet away from a plywood board with a frame around it. On this board, you’ll notice an 8×8 grid of 4″ squares painted on it. An LED light will shine from one of these squares, and you’ll be instructed to hit that square with the tennis ball.

You’ll throw the ball, and a monitor or nearby tablet will blink and give you a score, showing you where you were supposed to hit and where you actually hit. There will be two grids on this screen: grid one will show a color plotted on the square you were supposed to hit, and grid two will show the same color plotted wherever you actually hit.

After repeating this process 63 times, you’ll end up with a final accuracy score and two grids: one will be the source image you were (unknowingly) attempting to recreate, and the second will be your recreation. Theoretically, if one had perfect aim, the two would be exactly the same.
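I haven’t built any of this yet, but conceptually the scoring could be as simple as comparing the two grids throw by throw. Here’s a rough sketch of that idea; the names and the partial-credit rule are just placeholders I made up, not a finished design:

```cpp
// Rough scoring sketch (hypothetical): each throw has a target square and a
// hit square on the 8x8 grid. Full credit for a direct hit, partial credit
// for landing in an adjacent square.
#include <cstdio>
#include <cstdlib>

struct Cell { int x; int y; };  // 0..7 on each axis

double scoreThrow(Cell target, Cell hit) {
    int dx = std::abs(target.x - hit.x);
    int dy = std::abs(target.y - hit.y);
    if (dx == 0 && dy == 0) return 1.0;   // hit the intended square
    if (dx <= 1 && dy <= 1) return 0.5;   // just missed, adjacent square
    return 0.0;                           // way off
}

int main() {
    Cell targets[64] = {};  // would be filled from the source image's square order
    Cell hits[64]    = {};  // would be filled from the sensor board's readings
    double total = 0.0;
    for (int i = 0; i < 64; ++i) total += scoreThrow(targets[i], hits[i]);
    std::printf("Accuracy: %.0f%%\n", 100.0 * total / 64.0);
    return 0;
}
```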

These two images will be automatically uploaded to a web-app where they will be added to two larger images. Image 1 gets placed, like a puzzle piece, among other source images to ultimately recreate the larger original source image from which it came. Image 2 (the recreation attempt) gets placed similarly among the recreation attempts of the throwers before you, ultimately building a collective recreation of the larger original image.
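On the web-app side, dropping each thrower’s grid into the bigger image is mostly bookkeeping: each 8×8 tile gets an index, and that index determines where it lands in the collective image. A hedged sketch of the idea (the tile layout and names below are my own assumptions):

```cpp
// Hypothetical compositing sketch: place one thrower's 8x8 tile into the
// larger collective image, left-to-right and top-to-bottom by tile index.
#include <array>
#include <cstdint>
#include <vector>

using Tile = std::array<std::array<uint32_t, 8>, 8>;  // 8x8 grid of packed RGB colors

void placeTile(std::vector<std::vector<uint32_t>>& mosaic,  // the big image, [row][col]
               const Tile& tile,
               int tileIndex,      // which slot this thrower's session fills
               int tilesPerRow) {  // how many 8x8 tiles fit across the big image
    int originX = (tileIndex % tilesPerRow) * 8;  // top-left corner of this tile
    int originY = (tileIndex / tilesPerRow) * 8;
    for (int y = 0; y < 8; ++y)
        for (int x = 0; x < 8; ++x)
            mosaic[originY + y][originX + x] = tile[y][x];
}
```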

How does it work?

Well, that’s where it gets interesting.

Basically, here’s how the board will work: a couple of inches off the board’s surface, built into the surrounding frame, there will be two arrays of 36 light sources (laser diodes or LEDs) arranged along the X and Y axes of the board (72 total). Directly across the board from each of these will be a corresponding light sensor (a phototransistor or photoresistor). This will create a grid of beams, all connected to an Arduino Uno microcontroller.

When an object passes through this beam grid, it will temporarily block some of the beams. By monitoring which beams are blocked on the X and Y axes, the Arduino and the computer it’s connected to will determine (approximately) where the ball hit the board and feed this information into the web-app. Voila!
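To make that concrete, here’s a stripped-down Arduino-style sketch of the detection logic. It’s only a sketch of the idea: the pin assignments are placeholders, and a real Uno can’t read 72 sensors directly, so they’d have to be multiplexed or shifted in somehow.

```cpp
// Simplified beam-grid detection sketch (hypothetical wiring).
// Assumes each sensor reads HIGH while its beam is unbroken and LOW when a
// ball blocks the light; a real build would multiplex the 72 inputs.
const int NUM_BEAMS = 36;        // beams per axis
int xPins[NUM_BEAMS];            // ... fill with the actual X-axis sensor pins ...
int yPins[NUM_BEAMS];            // ... fill with the actual Y-axis sensor pins ...

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_BEAMS; i++) {
    pinMode(xPins[i], INPUT);
    pinMode(yPins[i], INPUT);
  }
}

void loop() {
  int xHit = -1, yHit = -1;
  for (int i = 0; i < NUM_BEAMS; i++) {
    if (digitalRead(xPins[i]) == LOW) xHit = i;   // this X beam is blocked
    if (digitalRead(yPins[i]) == LOW) yHit = i;   // this Y beam is blocked
  }
  if (xHit >= 0 && yHit >= 0) {
    // Report the approximate hit position to the connected computer/web-app.
    Serial.print(xHit);
    Serial.print(",");
    Serial.println(yHit);
    delay(250);  // crude debounce so one throw isn't reported repeatedly
  }
}
```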

So I know how to build the web-app, how to build the wooden frame, and definitely how to throw tennis balls at things, but electronic hardware, well, that’s not my strong suit. Hence my joining Maui Makers and learning about Arduinos. People on the Arduino forums have also been very helpful; if you’re interested, check out the Arduino forum thread where I’m getting help.

I’ve sketched out basic diagrams of how this will work, and have started researching the exact hardware I’ll need to make it happen. The more I research and talk to people about the idea, the less simple it seems. I’m realizing making this theoretical idea a physical reality is quite a bit more complicated than I initially thought. But luckily, every time I talk to people we come up with even more potential applications for a sensor board like this.

Once the hardware is built, the applications are seemingly endless. Here are just a few that have come up.

  • Use it to play Battleship. But no calling ‘A4’; you just throw the ball at a spot, and wherever it hits, that’s your shot.
  • Use it as an instrument; block different beams to play different notes.
  • Use it for pitching practice, to track accuracy.
  • Use it to draw/paint by dragging your hand through the beam grid. Or you could choose a color on a tablet, then throw a ball to plot that color wherever you hit on a grid.
  • If other people built these sensor boards, we could have people all over the world throwing at boards and contributing to the same images, or playing Battleship remotely.
  • Group throwers by skill level or age, and have them recreate the same source images. How does that affect the fidelity of the recreations?
  • People could take photos of themselves with a phone, then that photo could become the image they’re recreating by throwing at the board.

The possibilities are endless. And beautifully, it’s a way to be ACTIVE while interacting with computers and creating art.

 

brainstorm, human computer interaction, ux design

Brainstorm on Physically Active Computer Interfaces

*Apologies for the scattered, incomplete nature of this post; this was really a brainstorming exercise, but I figured, why can’t it be public brainstorming? So here goes:

 

Problems:

I feel much better physically when working a service job that requires movement, but I’m interested in working on computers. I can feel the physical/mental toll of spending too much time in front of a screen. Sitting all day is terrible for your posture and physical fitness. On days when I’m off work and do a lot of walking and moving, it’s amazing how much better I feel.

Traditional keyboard/mouse input is outdated and limited.

 

Solution:

As we spend more time interacting with computers, we should find ways to be physically healthy while we do it.

 

We need a paradigm shift in general consumer computer interfaces: more intuitive interfaces built to capitalize on how our bodies/brains evolved to function best. Instead of cramping our hands moving a mouse around a tiny square of desk to draw things, we should be swinging our whole arm around.

There are tons of ways we could be interacting with computers; we’ve barely scratched the surface.

 

Thoughts:

Room with projection on a wall, or Oculus Rift VR viewing.

Pick up and move physical blocks to change windows or programs.

Exercise taxes to disincentivize unwanted behavior (push-ups to open Facebook, burpees to open Netflix, etc.).

Associate different tasks with different areas of space and movements, helping to improve memory by activating more areas of the brain to connect with ideas and tasks. (cite spatial memory techniques discussed by Ed Cooke in Tim Ferriss Show #52).

Background colors, music, or imagery change for context switches, i.e., when you want to program there are peripheral backgrounds of mountains or something, then when you want to email you switch to backgrounds of cityscapes or something.

Soundscapes or music could change as well. An example of this idea already in use is workout music. When I work out or run, I’ll often listen to intense music about overcoming hardship or toughness (2pac, Talib Kweli, Wale, even, embarrassingly, Eminem), which serves as emotional motivation to push myself harder. Or I’ll use upbeat electronic music to put myself into a trance to dissociate from physical pain when pushing hard. We could program our environments/interfaces to change throughout the day based on what tasks we’re trying to do.

Punching bags or something to close programs?

 

Questions:

How could users network, manipulate others’ environments, share?

Would the productivity lost to slower interactions (if you had to move something physical to perform a task, rather than just click something) be outweighed by the overall productivity, happiness, and longevity gained from better health, more focus, fewer distractions, and stronger mental associations?

What could be a small-scope initial application for a test? An Xbox Kinect to control some interface?

human computer interaction, ux design

Natural Scrolling Vs. Reverse Scrolling

Occasionally, when using someone else’s laptop, I swipe my fingers up or down the trackpad to “scroll” a page up or down, and to my surprise, it moves the opposite way I expect it to. That’s because a few years ago I switched to Natural Scrolling. For those who don’t know, here’s the difference between the two:

Reverse: Swipe your fingers up on a trackpad, Magic Mouse, or scroll wheel, and the content goes down while the scrollbar goes up.

Natural: Swipe your fingers up on a trackpad, Magic Mouse, or scroll wheel, and the content goes up while the scrollbar goes down.
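Put in code terms (this isn’t any real OS API, just an illustration), the entire difference is the sign applied to your finger movement when the content position is updated:

```cpp
// Illustration only, not a real OS API: the two modes differ by a single sign.
enum ScrollMode { Reverse, Natural };

// fingerDeltaY is positive when your fingers swipe up on the trackpad.
// The return value is how far the content moves (positive = content moves up).
double contentDelta(double fingerDeltaY, ScrollMode mode) {
    return (mode == Natural) ? fingerDeltaY    // content follows your fingers
                             : -fingerDeltaY;  // content moves the opposite way
}
```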

Many people are used to reverse scrolling, because when scroll wheels were introduced to mice, they were linked to the indicator in the scroll bar, which controlled the viewport on a page.

In 2011, when Apple made natural scrolling the default in OS X Lion, I thought it was purely a matter of preference, and since I was already used to reverse scrolling, I opted to stick with reverse. But while experimenting with different types of workstations, I got to thinking about the concepts behind the two types of scrolling, and suddenly it dawned on me that I WAS WRONG!

Natural scrolling is flat out better than reverse scrolling, and here’s why.

Firstly, I find the concept of scrolling itself a little problematic. I like to think of moving the content itself, rather than scrolling a page. The scrollbar on the screen is merely an indicator of the position of our viewport. But our screen/viewport is almost always stationary relative to us. So if it’s not moving around, why are we pushing it? Instead, we should be pushing the content itself, which does move (virtually, anyway).

In the physical world, stuff usually moves when you push it, that’s what our brains expect. And it’s my belief that interfaces should be as intuitive as possible to free up our cognitive capacity for more important tasks than navigating space (like creativity, expression & critical thinking). With natural scrolling, when you push up on a trackpad, the content moves up. If you lay a piece of paper on your desk, and push up (forward) on it with two fingers, which way would it move?

What really drove this concept home for me was when I was experimenting with a standing desk with a monitor down low by my keyboard and mouse, angled up at me, in effect mimicking a table touch-screen computer (minus the touch-screen functionality). The trackpad and screen were on the same plane, which helped me link the virtual spatial plane we navigate on a computer with how our input devices (mouse/trackpad/tablets) interact with it.

Natural scrolling on computers is nothing new; it’s how touch screens already work: you push stuff in the direction you want it to go. Having reverse scrolling on a touch screen would feel totally unnatural. We should be doing the same thing with our mice and trackpads that we do with our phones and tablets.

Even on non-touch devices, many interfaces have panning tools, which work the way we’d expect: you grab and push or pull content in the direction you want it to go. Reverse scrolling works in the exact opposite way of these tools, while natural scrolling uses the same intuitive concepts.

Instead of moving a scrollbar/viewport, which adds another conceptual layer, inconsistent with our actual viewport (the screen), we should be moving the content itself.

You may think, “Well, I’m already used to reverse scrolling, it’s not worth it to change,” and while, depending on your situation, I can’t conclusively say that it is, I can say it seems likely that the reduced cognitive load from using a unified navigational framework across devices will be well worth the small amount of frustration it takes to retrain your brain to go “natural.” Personally, it took me about two days at work to become totally used to natural. In those two days I probably spent about five minutes combined re-scrolling pages I’d accidentally scrolled the wrong way.

You can decide for yourself whether it’s worth it to switch (if you haven’t already), but I suspect it is, especially since natural is the way of the future anyway.
