After playing SoundSelf, many players describe their experience as though they were interacting with another aware entity. "I did this, and then SoundSelf responded like this, so I did that too...." That's the result of players perceiving the game responding to the subtleties of their vocal expression in as delicate and attentive a way as another mind might. And while we try to accommodate nuances in player expression like that, it would be impossible for us to dream up responses to the whole range of vocal expression and the intentions behind it.
So instead we cheat: We take a limited range of player expression (tonality, rhythm of breath, "grittiness" of voice, vowel shape, and other ones we're working on) and obscure the way they affect the play experience. The result is that players instinctively know that the audio/visual dance is responding to them, and they make the assumption that it is responding to whatever aspect of themselves they are focusing on at that time.
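We haven't published SoundSelf's analysis code, but the first feature in that list (tonality) is easy to sketch. Here's a toy, hypothetical autocorrelation pitch tracker in Python, illustrating the principle rather than our actual implementation; the names and the frame setup are mine:

```python
import math

def estimate_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono frame via
    autocorrelation. A toy sketch: real voice analysis (tonality,
    breath rhythm, 'grittiness', vowel shape) is far more involved."""
    n = len(frame)
    # Only search lags corresponding to a plausible vocal pitch range.
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    best_lag, best_score = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1)):
        # A periodic signal correlates strongly with itself one period later.
        score = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if score > best_score:
            best_score, best_lag = score, lag
    return sample_rate / best_lag if best_lag else 0.0

# A synthetic 220 Hz "voice" at 8 kHz sampling, one 40 ms frame.
sr = 8000
frame = [math.sin(2 * math.pi * 220 * t / sr) for t in range(320)]
pitch = estimate_pitch(frame, sr)
```

In the real game that number would then drive the drones and visuals; here it simply lands within a few hertz of the chanted tone.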
The magic lies in that assumption, and my job as a designer is to feed players with enough datapoints to validate any assumption they may have. The player is smarter than the game, but the game's responsiveness invites the player to project their own mind into it, and to then imagine that what they are looking upon is a separate mind, not simply a reflection of their own. It's the same mental process as anthropomorphising a beloved pet, and it's what we're talking about when we describe the game as "a meditative experience."
nyaww, look how happy the Dolphin is!
For a certain kind of game, discovering the truth behind the datapoints is the Eureka moment that the game is designed to facilitate. Thanks in part to Jonathan Blow's in-depth discussions of his design values, this seems to have crystallized into a discipline of game design that has taken flight in indie games like Braid, Antichamber, Storyteller, and many more.
But for another kind of game, that Eureka moment breaks the game's contract with the player. In SoundSelf, the conceit is that you are dancing with another mind, and if a player were to discover the rules that govern the behavior of that other "mind", the assumed depth would collapse entirely.
In narrative-driven AAA games, I see this contract broken all the time, and as a gamer I can't see past it any more. AAA studios are getting better at this every year as they (slowly) climb their way out of the uncanny valley and include simulations of increasingly complex systems (e.g. anything pretending to be human) in their gameplay. Unfortunately, they compromise their promise of immersion as they appeal to the gamer's desire to conquer those systems.
To conquer a system and use it to their advantage (a trope of gameplay almost completely taken for granted in videogames), the player has to understand the intricacies of how it works. As any sociopath eventually discovers though, this just doesn't work with minds. Minds are too chaotic - the best we can do is adjust our assumptions and meet that chaos with our own.
A convincing simulation of a mind must not fall into predictable, "game-able" behavior. In "The Last of Us," each behavior pattern I came to fully grok from the AI reduced the enemies from presumably aware entities to merely cardboard buttons to be pressed. It pulled me out of the story. [sup]1[/sup]
Fortunately for game designers - humans tend to project their own mind wherever they both (a) identify a responsive system and (b) cannot fully understand the rules governing that system. We don't need to design a complex mind to simulate one - we can get away with designing responsive systems that also cannot be completely understood. Like reading meaning into a random Tarot draw, or seeing faces in the gnarls of wood, people will see mind everywhere with the right balance of predictability and chaos.
[sup]1[/sup] SPOILER ALERT: It could be argued that in "The Last of Us" the descent of the player's perception of the game's enemies from aware entities to cardboard-cutout-buttons-to-press effectively mirrors Joel's narrative descent into sociopathy, but I sort of don't buy that.
In 2009 I was studying film production at NYU specializing in sound design. If you'd met me around that time you probably would have recognized the film school swagger. It was in my last semester that I had a crisis of realization: When I look into the future of film, I don't see as bright a future as I do for videogames. Which one of these things do I want to be a part of? And don't videogames need sound designers too?
It was a really scary realization at the time. Had I been focusing on the wrong things all these years? Was it foolhardy to throw all my film education away to chase this impulse? I enrolled in the Game Center's very first class - a hands-on design class using the prototyping tool "Virtools". When I presented the initial concept for Deep Sea (my audio-only horror game from 2010), I announced a truth that I'd been taking for granted for years:
"I'm not a visual artist." I kind of just don't do that.
Deep Sea, Experimental Gameplay Workshop, 2012. Photo by Matthew Wegner
Making a game that accommodated this weakness wound up being a useful design directive. But now I wonder... if I'd been directly asked "are you a game designer?" would I have shut the door on that too? The irony of Deep Sea is that the decision to not grow in one direction drove me to grow in another. Thankfully I was never asked that, because without ever really trying to become a game designer, I found myself well out of my comfort zone creating the most interesting work of my life. [sup]1[/sup]
Since then, leaping outside of my perceived comfort zone has become my first commandment for creative work, and SoundSelf is very deliberately an embodiment of that. It's my first commercial game as lead. It will be my first full-scale installation at Burning Man. It's a practical exploration of a new spiritual side of myself. It's a thematic exploration of geometry, which I formerly knew very little about. It is my first real attempt at creating something explicitly musical, and it's the weirdest sounding thing I've put my sound-designer ears to.
Before working on SoundSelf, I knew what beginner's mind was, and I knew that internalizing beginner's mind as habit would help the most beautiful parts of myself shine brighter. What I didn't realize though was that beginner's mind takes long deliberate practice and hard fucking work. SoundSelf is literally a daily practice of this for me: every day presents me with challenges I must meet humbly, as a beginner. It's not easy, because the challenges that contain the richest opportunities for growth are always always always the ones that scare the shit out of me.
Fear is a compass.
Topher Sipes is a graphic and performance artist, and a dear friend of mine. He's best known for being the creative director of the Texas dance duo Artheism. Sam Beasley dances on the stage, while Topher dances with her through an iPad connected to a projector. Their collaboration is technologically organic and often sublime. Topher's methodical creative approach was (and continues to be) an inspiration to me.
Topher brought to SoundSelf an appreciation of nature. Our first design meeting was a hallucinogen-assisted hike through San Marcos, following the historical path of the San Marcos river and ending at the present location of the river itself. We talked about the fluid dynamics shared between rivers, shifting continents, and galaxies. The come-down brought us to a drum circle and group om. In the evening we recounted our experiences, taking copious notes and drawings.
For SoundSelf to be as responsive as we need it to be, we needed to create an enormous generative possibility space that is beautiful and unique from a million different perspectives. The trouble was, Topher couldn't flow with the arcane language of our scripting system.
When we discovered that SoundSelf's art is fundamentally a scripting job, Topher and I went our separate ways. He's still involved on the project as a mind I value, but his contribution is much less hands-on. So we had an opening for a generative artist.
A digital painting by Topher Sipes
This was a very difficult opening to fill. We needed somebody with a great eye who could learn to program generatively, think exploratively, work mostly for royalty, and adopt SoundSelf as their almost-full-time passion project.
After a lot of deep searching, we found Todd Cook - a visionary mind who was eager to learn and pour himself into SoundSelf. He was interested in fractals, complex geometry, and visual perceptual processes. He brought to the visual space an order that had been formerly lacking. But the pressure of learning our scripting system was too much, and he too went his own way.
Both of these artists have left their mark on SoundSelf, and I'm grateful to have opened our canvas to their minds.
I think that as I struggle to grow as a student of beginner's mind, the most pernicious habits remain invisible even as I stare them in the face. While knowing consciously that fear leads me in the most beautiful directions, there are towering monoliths of fear and doubt that evade direct confrontation. I don't even know they're there until I hear the words come out of my mouth: I just don't do visual art. I'm just not good at that. I can't do that. No.
Here's what SoundSelf looked like at E3, before I took full ownership over the project's graphics:
And now that I've decided that I can be a goddamn visual artist if I want to be:
(Protip: look "past" the image to get the 3D effect)
Every time this happens, those monoliths of fear become easier to spot, and then to face, and I become a little richer for it. But that takes a lot of its own kind of work.
The current development build of SoundSelf can be downloaded from http://www.soundselfgame.com.
For some reason I'm imagining myself as, like, a tomato plant or something. And this whole "I'm not a visual artist" thing is like plant-scar-tissue or something... y'know when you see a plant that looks like it just started spontaneously growing in a different direction? Anyway, the truth is that I didn't just grow reactively towards game design, but that Charles Pratt and Frank Lantz of the NYU Game Center were laying out copious amounts of fertilizer. I'm done with this tomato plant metaphor now.
Watch this short scene from "Stranger than Fiction":
Describe to yourself what you saw.
I've spent most of my short career creating and implementing sound effects for games (Antichamber, Capsule, Orcs Must Die 2). Humans are visual thinkers, and to at least some degree we critically analyze what we see. What I find so exciting about sound design is that nobody's thinking about it - they don't even realize it's there. That means sound designers can inform the player's mood and thoughts in a particularly subversive way. The clip above is my favorite example of an established subversive technique called "synchresis." This is a trick used by sound designers all the time when they want to lend the associations of one object to another. The construction truck in the clip above reminded you of a T-Rex because it was layered with the sound of one[sup]1[/sup].
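At its crudest, that kind of layering is just a weighted mix: you sum a "sweetener" recording under the "bed" you want to re-color. Here's a minimal, hypothetical Python sketch of the idea (real sound design adds EQ, pitch-shifting, and careful sync to picture on top of this):

```python
def layer(bed, sweetener, gain=0.5):
    """Mix a 'sweetener' layer (e.g. a T-Rex roar) under a 'bed'
    (e.g. a truck engine), clipping to the [-1, 1] sample range.
    The shorter buffer is padded with silence."""
    n = max(len(bed), len(sweetener))
    bed = bed + [0.0] * (n - len(bed))
    sweetener = sweetener + [0.0] * (n - len(sweetener))
    return [max(-1.0, min(1.0, b + gain * s))
            for b, s in zip(bed, sweetener)]

# Tiny illustrative buffers standing in for real recordings.
truck = [0.2, 0.4, -0.3, 0.9]
roar = [0.5, -0.5, 0.5]
mixed = layer(truck, roar)
```

The perceptual magic isn't in the arithmetic, of course - it's in the viewer fusing the layered roar with the image of the truck.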
These perceptual tricks get me really excited. Synchresis is a particularly well studied method (the term was invented by Chion for his wonderful book Audio-Vision), but there are lots of others and we're inventing new ones all the time. In the interactive arts, we can go even further and deeper in how we trick your subconscious!
You can forget I told you anything about synchresis now if you like. It'll work out better for both of us this way...
SoundSelf's non-representational audio is a playground for me as a sound designer to experiment with using novel techniques to get under the player's skin. Some of these techniques are on the scientific fringe - areas of research that haven't yet been rigorously tested, but have either a rich tradition in ritual or show anecdotal promise for healing.
Unfortunately it's just not realistic to tack a rigorous scientific study onto our game design process. That said, by maintaining an open-minded and casually empirical approach to development we're finding a cocktail of techniques that draw the player into a trance state for SoundSelf. Right now we're experimenting with a combination of binaural beats (playing slightly different tones in your left and right ear, which produce a 5-15 Hz beat when summed by your brain) and shamanic drum rhythms to help guide the player into and out of the experience. Whether and how well these work remains to be seen, but our early tests are very encouraging.
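For the curious, the principle behind a binaural beat is easy to sketch: synthesize two sine tones a few hertz apart, one per ear. This hypothetical Python generator illustrates the idea, not SoundSelf's actual audio engine:

```python
import math

def binaural_beat(carrier_hz, beat_hz, seconds, sample_rate=44100):
    """Generate (left, right) sample lists whose frequencies differ
    by beat_hz; the brain perceives that difference as a slow pulse
    even though neither ear hears it directly."""
    n = int(seconds * sample_rate)
    left = [math.sin(2 * math.pi * carrier_hz * t / sample_rate)
            for t in range(n)]
    right = [math.sin(2 * math.pi * (carrier_hz + beat_hz) * t / sample_rate)
             for t in range(n)]
    return left, right

# A 200 Hz carrier with a 10 Hz beat, inside the 5-15 Hz range above.
left, right = binaural_beat(200.0, 10.0, 0.1)
```

Played over headphones (never summed to one speaker, which would turn the illusion into an ordinary acoustic beat), the two channels produce the perceived 10 Hz pulse.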
There's one technique though that's clearly very potent for intimately connecting the player with the experience: SoundSelf's music is made up of long drones whose pitch is determined by the player's own voice. By matching the player's pitch with a sound that is at least not drastically different from a human voice, it feels to the player like the game's audio is emerging from their body. This allows us to do some really magical things! By associating timbre change (say, from a cello to a synth) with a visual change, the player feels like their own voice and body are morphing in step with the game. It's quite trippy.
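The pitch-matching itself can be pictured as a simple control loop that glides the drone's frequency toward the detected vocal pitch on every tick. This is a hypothetical one-pole smoother in Python - an illustration of the idea, not SoundSelf's actual synthesis code:

```python
def follow_pitch(drone_hz, voice_hz, rate=0.1):
    """Glide the drone's pitch a fraction of the way toward the
    player's detected vocal pitch each control tick, so the drone
    feels like it is emerging from the player's own voice rather
    than snapping to it mechanically."""
    return drone_hz + rate * (voice_hz - drone_hz)

# Thirty control ticks of a held 220 Hz chant, drone starting an
# octave below: it converges smoothly toward the voice.
drone = 110.0
for _ in range(30):
    drone = follow_pitch(drone, 220.0)
```

The `rate` parameter is the design knob here: too fast and the drone sounds like a robotic tuner, too slow and the player stops feeling ownership of it.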
Try chanting along to the video below, or try out the Kickstarter build of SoundSelf to see for yourself.
[sup]1[/sup] We all know what a T-Rex sounds like because Gary Rydstrom made it up for Jurassic Park.
It seems like a catch-phrase now to say "The Oculus Rift is the future of gaming" - and as skeptical as I am around phrases that get repeated until they lose their meaning, I believe this one to be true. As a developer of a Rift game, there are few people as sunny-eyed as myself, so you'd be right to read this with an eye for bias. But with the experience of putting the Rift build of SoundSelf in front of players at E3, and finally witnessing the VR experiments of other artists, my confidence in this piece of hardware and the company making it is blossoming, and I'll say it myself:
The Oculus Rift is the future of gaming, but not for the reasons you think.
There's been plenty of discussion elsewhere about what makes the hardware so awesome. And they've done a better job than I could, because what makes it awesome is that it *just works*, which is difficult if not impossible to explain to someone who has not experienced it. Instead I'm going to tell you what I see growing around the hardware.
I've been using this blog to explore what I see as a new genre in videogames that I'm calling the "Videodream." Videodreams are explorative (Tenets of Videodreams, Part I), they forsake goals and story as the prime organizer of player experience (... Part II), and they are musical (...Part III). Proteus may have been the early prototype for the videodream, quietly shattering a barrier of what designers (and some consumers) think games are for.
That this burst of experiential gaming is happening in parallel with an inexpensive evolution in immersive hardware is a marriage that bodes well for the videodream, for virtual reality, and for the future of videogames as an expressive medium altogether.
Limitations are extremely valuable for thinking creatively, but they become insidious when they're taken for granted and left unquestioned. We have few limitations as pervasive and unquestioned as the screen, and in breaking those shackles as thoroughly as the Rift does, designers are invited to break free from other invisible limitations too. From a simple rocket trip past the moon, to explorations of abstract imaginary spaces, to a guillotine simulator - we're already seeing a fierce amount of originality in these prototypes. Perhaps I've already defined "videodream" too narrowly?
These unusual VR poems may be a spasm of exploration, laying a path for traditional gaming to grow into as it slowly embraces VR. But they could just as well be the first strides of an outward rush to discover new possibilities for virtual reality and gaming as a whole!
Don't get me wrong, VR dramatically raises the bar for goal-oriented experiences too. EVR and If A Tree Screams in the Forest are two excellent examples that not only use the Rift's mind-enveloping 3D to sink the player into the game's fiction, but do so by integrating head tracking technology into the core mechanics. The results are extraordinary. In EVR, just being able to look around makes the massive 3D battlefield comprehensible without using an awkward map. Does that change gaming forever? Probably not... but it's frickin' awesome!
However, since videodreams are organized around the sensory experience (as opposed to goals and the intellectual path towards achieving them), they don't just perform better in an immersive environment, they require it. Playing Proteus without David Kanaga's magical score, or while tending to a mental list of chores, is not playing Proteus at all. And while the island's beauty guides the player into a quiet mental space, like any other form of hypnosis it only works if the participant is willing.
There's a double-challenge here for videodreams. Like any immersive game, we have to isolate the player. The Rift checks that box beautifully. More difficult though, we must guide the player to let go of their search for extrinsic goals and rewards. That one's not so easy because our industry has trained players to look for goals as the scaffolding from which to build their gaming experience. We essentially have to re-frame the player's whole relationship with the game, ideally with more elegance than telling them directly. And here's where the videodream finds its own extrinsic reward in the Rift…
Gamers love innovation. I've seen mainstream AAA gamers get just as excited about SoundSelf as weirdo indies and hippies do... once they've suspended their expectations for a moment. The Rift is a gateway into the weird, but it's one that fits neatly into the narrative of gamers' dreams since the 80s! This is the day gamers have been waiting for, and they're eager to see what new experiences are waiting for them on the other side of the door. They want to be surprised, they want to try something new, they're asking louder than ever "what's next?"
All avant garde developers have to do now is amaze them.
September 6, 2013 - RAGameSound
Two weeks ago I began describing patterns in what I see as an emerging videogame genre I'm calling "Videodreams". Examples of videodreams are Proteus, Panoramical, Pixeljunk 4am, Frequency Domain, and our own experiment SoundSelf. They are characterized by wandering exploration (Tenets of Videodreams, Part 1), non-goal-oriented gameplay (...Part 2), and an embrace of the abstract (...Part 2). They are designed to be experienced - not beaten, explored - not understood, felt - not thought.
The final distinguishing commonality I see in these experiences is persistent interactive musicality: music is woven into these games as an integral and inseparable part of the experience. Unlike most music games however, the music does not define rules by which the player succeeds or fails, but is a canvas of free exploration.
Some of the games I've identified above define a feedback loop between the player and the musical experience. SoundSelf (a game in which the player's voice drives a synesthetic light and sound show) and Panoramical (a game in which the player uses midi faders to define an audio-visual landscape) co-create generative and reactive music with their players in which the audio-visual experience hypnotically inspires interaction, while that interaction in turn impacts the audio-visual experience.
By comparison, while Frequency Domain (a game in which the player flies through a landscape generated by a music track's FFT) doesn't fixate on an audio feedback loop between the player, the visual interaction (flying and spinning your ship, jumping between frequency peaks) is itself a dance with the music.
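The FFT-to-landscape idea is simple to sketch: each frequency bin's magnitude becomes a terrain height at that position. Here's a naive, hypothetical Python DFT illustrating the principle (Frequency Domain's actual pipeline is, I assume, far more elaborate):

```python
import cmath
import math

def spectrum_heights(frame):
    """Naive DFT: the terrain height for each frequency bin is that
    bin's normalized magnitude, the way a game might carve a
    landscape out of a music track's spectrum frame by frame.
    Only the first half of the bins is kept (the rest mirror them
    for a real-valued signal)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

# A frame with all its energy in bin 2 yields a single terrain peak.
frame = [math.cos(2 * math.pi * 2 * t / 16) for t in range(16)]
heights = spectrum_heights(frame)
```

In practice you'd use a fast FFT with windowing and run it over successive frames of the track, scrolling the resulting height columns past the player's ship.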
While music is core to the experiences that inspired me to begin talking about videodreams, I wonder if music is essential to creating these dreamlike experiences, or if it's just a particularly powerful tool for grounding the player in their body and in flow with the moment. Musically minded game developers use rhythms to entrain mental activity, or to subversively set a particular mood without the player noticing (and then intellectually engaging with) how these changes in their mind/body are happening. I wonder, does musical interaction help quiet the consciously critical/intellectual mind, thereby helping the player achieve a zen-like state?
It's important when discussing genre not to let the definition of what is or is not x limit the conversation. These boundaries are porous and subjective. The Bit Trip series and Super Hexagon both come to mind as experiences that draw the player into a dream-like state, and they do so with fast-paced goal-oriented gameplay.