Ideation Tool – Redesign & Interactive Prototype

Previous Iteration

 

Call for Interaction

With mobile device peripherals in mind, I set out to create a more playful interaction, one drawn from the intersection of common real-world gestures and the mobile phone's accelerometer.

I borrowed two playful gestures from daily observation: the reflexive spin and the swift juggle, two gestures that seemed both technically feasible and experientially engaging.

UI Redesign

I decided to simplify the interface around the new experience. The accelerometer remixes the images, and a tap reveals a generated text prompt. This text is a computational remix of the descriptions of the objects shown on screen. This way, people can be inspired both visually and textually.

Interactive Prototype

The interactive prototype was made in JavaScript using the Cooper Hewitt Museum's API, RiTa (a toolkit for computational literature), and the p5.js library. This is where the Interactive Prototype can be experienced.

Through JavaScript, I retrieve data from the Cooper Hewitt Museum's online exhibition database, including images and text. I then clean the information and select a topic; in this case, all objects in the museum related to 3D printing.
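The cleaning step can be sketched roughly like this. Note that the record shape and field names here are illustrative stand-ins, not the actual Cooper Hewitt response format:

```javascript
// Keep only records that have both a description and at least one image,
// then narrow them down to a topic by matching the description text.
// (Field names are hypothetical; adapt them to the real API response.)
function selectTopic(objects, topic) {
  return objects
    .filter(o => o.description && o.images && o.images.length > 0)
    .filter(o => o.description.toLowerCase().includes(topic.toLowerCase()));
}

const sample = [
  { title: "Vase", description: "A 3D printing experiment", images: ["a.jpg"] },
  { title: "Chair", description: "Molded plywood", images: ["b.jpg"] },
  { title: "Lamp", description: "3D printed shade", images: [] } // no image, dropped
];
console.log(selectTopic(sample, "3d print").map(o => o.title)); // ["Vase"]
```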

Gestures

It turns out the spinning gesture is one of the blind spots of phone accelerometers. This is why the prototype only responds to juggling-type gestures.
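A rough sketch of how a juggling-type toss could be detected: look for a spike in total acceleration magnitude well above resting gravity. The threshold here is a guess to be tuned by testing; in p5.js the readings would come from `accelerationX`, `accelerationY`, and `accelerationZ` inside `deviceMoved()`:

```javascript
// Illustrative toss detector: a swift toss shows up as an acceleration
// magnitude that departs sharply from resting gravity (~9.81 m/s^2).
const TOSS_THRESHOLD = 15; // departure from gravity, in m/s^2; tune by testing

function magnitude(ax, ay, az) {
  return Math.sqrt(ax * ax + ay * ay + az * az);
}

function isToss(ax, ay, az, gravity = 9.81) {
  return Math.abs(magnitude(ax, ay, az) - gravity) > TOSS_THRESHOLD;
}

console.log(isToss(0, 9.81, 0)); // false: phone at rest
console.log(isToss(5, 30, 4));   // true: sharp jolt
```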

Text Prompt

By retrieving the descriptions of the three objects shown on screen, I create one phrase by remixing their tokens through a set of computational procedures. Every time the images change, the tokens from which the phrases are built change as well.
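The remix can be sketched as follows. This is a stand-in for the actual RiTa-based procedures: a simple seeded picker, so a given set of descriptions always yields the same phrase until the images change:

```javascript
// Pool the tokens from the on-screen descriptions and pick a handful
// with a deterministic pseudo-random sequence (a toy linear congruential
// generator; RiTa offers much richer tokenization and remixing).
function remixPhrase(descriptions, seed, length = 6) {
  const tokens = descriptions.join(" ").split(/\s+/).filter(Boolean);
  const phrase = [];
  let s = seed;
  for (let i = 0; i < length; i++) {
    s = (s * 1103515245 + 12345) % 2147483648;
    phrase.push(tokens[s % tokens.length]);
  }
  return phrase.join(" ");
}

const phrase = remixPhrase(["red clay vase", "woven steel chair"], 7);
console.log(phrase); // six tokens drawn from the two descriptions
```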

Even though the generated phrases contain grammatical errors, embracing this computational glitchiness aligns with the overall playful, mind-diverting concept of overcoming a creative block.

Ideation Tool from Cooper Hewitt Museum API

This project is an ongoing pursuit of the question: how do we overcome a creative block? Partnering with Lutfiadi Rahmanto, we started out scribbling, sketching, and describing the problem to better understand what it meant for each of us, how we scope it, and how we usually respond to it.

UX Research

From the first session we narrowed the idea down to a defined goal: a tool to aid inspiration in the creative process. This led us to consider various aspects of the target scenario and allowed us to start asking other creatives about it. We sought to better understand, qualitatively, how creatives describe a creative block and, more importantly, how they overcome it. From this session we were also able to reflect on how to aid the starting point of ideating, often a hard endeavor. A resonating answer, in the end, was linking unrelated words, concepts, or ideas.

We also researched two articles with subject-matter experts about creative block and overcoming it ("How to Break Through Your Creative Block: Strategies from 90 of Today's Most Exciting Creators" and "Advice from Artists on How to Overcome Creative Block, Handle Criticism, and Nurture Your Sense of Self-Worth"). Here we found our initial hypotheses collaged with additional components, such as remix: Jessica Hagy's wonderful analogical method of overcoming her creative block by randomly grabbing a book, opening it to a random page, and linking "the seed of a thousand stories". Another valuable insight was creating a space of diverted focus away from the task generating the block. We also found a clear experience-design directive for our app: to balance constraint (structured, scrambled data from the API) with freedom (imaginative play).

Brief, Personas and Scenarios

After validating our intuitive hypotheses on how to address the problem through the contextual inquiries and online articles, we came up with a solid design brief:

Encourage a diverted focus where people can create ideas by scrambling data from the Cooper Hewitt database into random ideas (phrases).

Through this research we identified seven different behavior patterns and mapped them onto a two-axis map that defines the extent to which personas behave between casual/serious and unique/remix.

For a more detailed description of these archetype behaviors, visit this link.

This enabled us to create our guiding design path through what Lola Bates-Campbell describes as the MUSE: an outlier persona to direct and answer the usual nuances behind designing, in this case, our mobile application tool to aid Mae Cherson through her creative block. We determined her goals and thus her underlying motivations, what she usually does (her activities) in her creative environment, and how she navigates between small and greater creative blocks in her working space. We also described her attitudes towards this blocking scenario and how her feelings entangle whenever she seeks inspiration. Other traits were determined as well and can be read in more detail through this link. Overall, we crafted this Muse as a reference point for creating an inspirational experience for the selected archetypes: The Clumsy Reliever and The Medley Maker.

Engagement

Parallel to the archetype mapping, we began thinking about how to engage our audience (artists, designers, writers, thinkers, makers, tinkerers, all poiesis casters). We soon realized the opportunity of captivating our audience through a game-like interaction: a gameplay that requires simple gestures and encourages discoverability. Some of the games we took as references are Candy Crush and Two Dots, two simple games that have stood out for their deep and widespread engagement.

Wireframe Sketches

With research cues and possible game-like affordances in mind, there was proliferous space to weave tentative design solutions. Hence we spent a good while sketching layouts, concepts, poetic interactions, and nonsense infractions.

On the other side, we made sense of these sketches and sought a balance between amusement and feasibility. At the end of this session we came up with three design layout concepts and general affordances (calls to interaction): Linking, Discovering, and Dragging.

Test Insight

From these concepts we started making interactive prototypes. While creating the Discovering prototype, we realized that people's intuitive mental model beneath a Candy Crush-like interaction did not match our design intent, and trying to force a match turned out overly complicated. This is why we created prototypes for the Linking and Dragging concepts instead.

Prototypes

Another prototype explores the underlying preference between text-driven and visually-driven inspiration. While testing these prototypes we realized that some people tend to feel more inspired by imagining the words from a text, while others feel more inspired by visual cues. This prototype allows both explorations.

The next step is to select one gameplay interaction from our user tests and syntactically address the text data from the API.


This is another interaction mode, the Remixing Mode, conceived after Katherine's valuable feedback on our final prototype, which can be accessed through this link.

Box Fab

Concept

We decided to work with live hinges for our first project. We started off by proving the concept with black foam.

Tests

After some tests, we chose the "parametric kerf #6" pattern given its generous flexibility. For our overall box concept we combined the live-hinge method with a semi-cubed volume for dice. The next step we took was to start cutting the two apparently identical pieces.

Insight

However, our estimate for covering the half circles was inaccurate, preventing the planes from fully assembling with one another.

Fabrication

For our second iteration, we followed Eric's advice and jumped to prototyping with our final material, wood. This time we planned ahead and did some calculations to make sure the sides' height would match the half-circle perimeters. We also planned for 45º edges, so we created 5 mm inner reference raster edges to sand down after cutting. Since the material is 5 mm thick, we realized that for 45º edges we needed a "square" reference to more or less know our limit when sanding off the residue.
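The side-height check boils down to matching the arc of the half circle. A minimal worked example (the radius here is illustrative, not our actual dimension):

```javascript
// A side wrapping a half circle of radius r must be as long as the
// half-circle's arc: pi * r.
function halfCircleArc(radiusMm) {
  return Math.PI * radiusMm;
}

console.log(halfCircleArc(40).toFixed(1)); // "125.7" mm of side height for r = 40 mm
```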

On our second laser-cutting attempt, we ran into some unexpected technical obstacles. Besides setting the power a bit too high, the machine also cut with an offset (reason still unknown). Last but not least, the 60W laser cutter's settings differ from the 50W's when it comes to edging/rastering with black. This third setback was in fact a happy accident that allowed us to realize we could simplify the entire process by scaling one of the sides by the thickness of the material. Our third cut ran quite smoothly.

Error Correction and Experimentation

We even explored ways of conveniently bending wood with warm water and overnight drying. The result wasn't perfect, but from this first experiment we now know how to make a perfectly matching wood bend. In the end, our planned magnetized closing lid wasn't necessary. This is our final prototype, along with our inspirational dice.

Result

Generative Soundscape 0.1.2

This installation pursues playful collaboration. By placing the modules in arbitrary configurations, the idea behind this collective experience is to create scenarios where people can collaboratively build infinite layouts that generate perceivable chain reactions. The installation is triggered through a playful gesture similar to bocce, where spheres can ignite the layout anywhere in the installation.

 

After an apparent success (context-specific) and a consequent failure (altered context), the project turned to a functional alternative. The following process illustrates it.

These images show the initially thought-out circuit, which included a working sound triggered by a (static) threshold. We also experimented with Adafruit's Trinket, aiming towards circuit simplification, cost-effectiveness, and miniaturization. This shrunken microcontroller is built around an ATtiny85, nicely breadboarded and bootloaded. In the beginning we were able to upload digital output sequences to drive the speaker and LED included in the circuit design. The main blockage, which we managed to overcome in the end, was reading the analog input signal used by the microphone. The last image illustrates the incorporation of a speaker amplifier to improve the speaker's output.

The next two videos show

1. The functional prototype, which includes a hearing window: if the microphone senses a value greater than the set threshold, it stops listening for a determined time.

2. The difference between a normal speaker output signal and an amplified speaker output signal.
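The hearing-window logic from the first video can be sketched like this (the threshold and cooldown values are illustrative; the real circuit runs on a microcontroller, so this is just the logic expressed in JavaScript):

```javascript
// Once a reading crosses the threshold, the module triggers and then
// goes "deaf" for a cooldown period, ignoring further readings.
function makeListener(threshold, cooldownMs) {
  let deafUntil = 0;
  return function hear(value, nowMs) {
    if (nowMs < deafUntil) return false; // still inside the deaf window
    if (value > threshold) {
      deafUntil = nowMs + cooldownMs;    // trigger, then stop listening
      return true;
    }
    return false;
  };
}

const hear = makeListener(512, 1000);
console.log(hear(600, 0));    // true: above threshold, triggers
console.log(hear(700, 500));  // false: still deaf
console.log(hear(700, 1500)); // true: window elapsed, triggers again
```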

After the first full-on tryout, it was clear that we needed a dynamic threshold (a trigger value that adapts to its ambient sound level). The microphone, however, broke one day before the deadline, so we never got to try this tentative solution (even though there's an initial code sketch).
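The dynamic-threshold idea could look something like this: track a running average of the ambient readings and fire only on spikes well above it. The smoothing factor and margin are guesses, not values from our initial sketch:

```javascript
// The trigger level follows an exponential moving average of ambient
// readings; only readings that exceed ambient + margin count as a trigger.
function makeDynamicThreshold(alpha = 0.05, margin = 100) {
  let ambient = null;
  return function update(reading) {
    if (ambient === null) ambient = reading;  // seed with the first reading
    const fired = reading > ambient + margin; // spike over the adaptive floor?
    ambient = (1 - alpha) * ambient + alpha * reading;
    return fired;
  };
}

const fire = makeDynamicThreshold();
console.log(fire(200)); // false: establishes the ambient level
console.log(fire(210)); // false: ordinary ambient fluctuation
console.log(fire(600)); // true: a spike well above ambient
```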

Plan B: use the call-to-interaction event itself. In other words, use the collision and the vibration it generates to trigger the modules through a piezo. Here's the code.

A couple of videos illustrate the colliding key moments that trigger the beginning of a thrilling pursuit.

 

And because sometimes plan B also glitches... Special thanks to Catherine, Suprit, and Rubin for play-testing.

 
 
 

Generative Synthesizer Prototype

This is a follow-up to the Generative Propagation concept. With these exercises, I intended to answer two questions:

  1. How can the trigger threshold be physically controlled? (How can the mic's sensitivity be manipulated?)
  2. How can the tempo be established? (How often should each module emit a sound?)

The trigger threshold can be manipulated by manually controlling the microphone's gain, that is, the amount of voltage transferred to the amplifier (potentiometer to IC).

By manipulating this potentiometer, the sensitivity of the microphone can be controlled.

The tempo can be established by timing the trigger's availability. By setting a timer that re-enables listening, the speed/rate at which the entire installation reproduces sounds can be established.
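That timing idea can be sketched as a simple gate (the 500 ms value is illustrative): the re-listen timer directly sets each module's maximum pace, and with it the installation's overall tempo.

```javascript
// A module may emit only when its re-listen timer has elapsed,
// so the timer length caps the rate of sounds per module.
function makeTempoGate(relistenMs) {
  let readyAt = 0;
  return function tryEmit(nowMs) {
    if (nowMs < readyAt) return false; // still waiting to listen again
    readyAt = nowMs + relistenMs;      // emit, then rearm the timer
    return true;
  };
}

const gate = makeTempoGate(500);      // at most one sound every 500 ms
console.log(gate(0));   // true
console.log(gate(300)); // false: timer not elapsed
console.log(gate(500)); // true
```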

UI Draft #2 BCI & Processing

This is the interactive wireframe so far for my BCI interactive installation. Basically, I'm exploring ways to better communicate what's going on when using the MindWave, and how we can translate its signal into a more structured task. The code for this UI wireframe can be found in this GitHub repo.

Morse Code Translator

Inspired by the "Hi Juno" project, I sought an easier way to use Morse code. This is why I created the Morse Code Translator, a program that translates your text input into "morsed" physical pulses. One idea to explore further is how words could be expressed physically and perceivably (sound, light, taste?, color?, temperature?).

So far I've successfully implemented the serial communication and the Arduino functionality. In other words, the idea works up to the Arduino's embedded LED (pin 13). This is how a "HI" looks translated into light.
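The translation step can be sketched like this. This is a stand-in for the actual Processing-to-Arduino pipeline: it turns text into on-durations in dot units (1 = dot, 3 = dash, per standard International Morse), which could then be sent over serial to drive the LED:

```javascript
// Illustrative subset of International Morse; extend for the full alphabet.
const MORSE = { H: "....", I: ".." };

// Translate text into pulse durations: 1 dot unit for ".", 3 for "-".
function toPulses(text) {
  return text.toUpperCase().split("").flatMap(ch =>
    (MORSE[ch] || "").split("").map(sym => (sym === "." ? 1 : 3))
  );
}

console.log(toPulses("HI")); // [1, 1, 1, 1, 1, 1] — four dots, then two dots
```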


Next up: making the solenoid work through Morse-coded pulses. You can find the Processing and Arduino code in this GitHub repo.

Servo Tinker Application

 

After tinkering with a conventional servo to read its position data, I'm still figuring out a way to apply this feedback reading to an aesthetic application. Even though I'm unsure how I can implement this into the former concept, it certainly sparks interesting interactive possibilities. The code can be found in this GitHub repo.

I also started a sketch of a servo triggered by a digital input. When triggered, the servo moves across a 30º range, back and forth. The idea to explore further is to modulate its speed with an analog input, and maybe add a noise (Perlin, most likely) effect.
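The back-and-forth sweep can be sketched as a pure function of time, a triangle wave over the 30º range, which makes the speed easy to modulate later from an analog input. This is the logic only, in JavaScript; the actual sketch drives a real servo, and the center angle is an illustrative choice:

```javascript
// Angle of a servo sweeping back and forth across `range` degrees,
// centered on `center`, at `speedDegPerMs` degrees per millisecond.
function sweepAngle(tMs, speedDegPerMs, center = 90, range = 30) {
  const phase = (tMs * speedDegPerMs) % (2 * range);
  const offset = phase < range ? phase : 2 * range - phase; // triangle wave
  return center - range / 2 + offset;
}

console.log(sweepAngle(0, 1));  // 75: start of the sweep
console.log(sweepAngle(15, 1)); // 90: mid-sweep
console.log(sweepAngle(30, 1)); // 105: far end, about to reverse
```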

UI Draft #1

With the open-source Java toolkit Processing, I started exploring user interfaces, time representation, and hover timing. Hover timing might bring interesting possibilities for natural user interfaces such as the Kinect or Leap Motion, where different affordances come into play with simple tasks like selecting an element. The code for this draft can be found in this GitHub repo.
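The core of hover timing is a dwell timer: an element counts as selected only after the cursor has stayed over it continuously for a set time. A minimal sketch of that logic (the draft itself is in Processing; names and the one-second dwell are illustrative):

```javascript
// Returns a function that reports "selected" once the pointer has dwelt
// over the element for dwellMs without leaving; leaving resets the timer.
function makeHoverTimer(dwellMs) {
  let enteredAt = null;
  return function update(isOver, nowMs) {
    if (!isOver) { enteredAt = null; return false; } // left: reset
    if (enteredAt === null) enteredAt = nowMs;       // just entered
    return nowMs - enteredAt >= dwellMs;
  };
}

const hover = makeHoverTimer(1000);
console.log(hover(true, 0));     // false: just entered
console.log(hover(true, 1000));  // true: dwelt long enough, select
console.log(hover(false, 1100)); // false: left, timer resets
```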