Why Integrating Sound Design Early in Game Development is Essential
The Importance of Early Sound Design Integration
In game development, sound can play a crucial part in making a player feel immersed and emotionally engaged in an experience. It provides vital cues to guide and lead the player, delivers tangible feedback when they perform actions within the game, and can reinforce successes or failures. It grounds them in your world, through the ups and downs of your narrative. It supports nearly every facet of your game.
There is no doubt that early integration of sound design within the game development process leads to a more cohesive and polished final product. Bringing on sound design early allows long-term collaboration with the rest of your game’s team, giving time to prototype, experiment and grow the audio pipeline alongside every other discipline in the development process.
Sometimes sound design in games is treated as a post-production process, with sound brought on as one of the last disciplines in the pipeline. This leaves less time for prototyping, iteration and experimentation, as well as less time to catch and fix unintended bugs and performance-related issues. The result is audio that isn’t as tightly integrated as it could be, and a game that isn’t making the most of what sound design can bring to the experience.
I can’t think of a single instance where bringing on sound design as early as possible wouldn’t result in a better sounding game and more importantly, a better final experience for the players.
Defining the Role of Sound in Your Game
When defining the role of sound in your video game, it is important to define early on sound design’s primary functions within your game. Is it enhancing immersion? Guiding players? Providing feedback? Increasing player’s awareness? Making the player feel powerful? Setting the emotional tone of the experience?
Sound has the ability to do all of these things and more. Defining which areas are most critical for your game early on will help make it easier for your sound team to focus on and plan their approach from the get go.
Is your game light-hearted and cartoony in style? Chances are you’d want playful audio that supports and reinforces that aesthetic, with sound design that can delight and ease players into the experience.
Is your game a realistic military simulator? Then we’d likely expect gritty and grounded audio, with accurate-sounding firearms, where each bullet ricochet helps players pinpoint exactly where they are being attacked from, and by which weapon.
Each of these games will have vastly different requirements and goals for their sound design.
Early Prototyping and Sound Experimentation
There are numerous benefits to early prototyping and experimenting when it comes to sound - as there are when prototyping any feature in the game development process. Getting some placeholder sounds up and running within the game as soon as possible can be extremely valuable and a time saver in the long run.
For example, if your game has characters that can walk or run, it is likely that there are also going to be different surfaces that they can walk or run on. Implementing a quick test environment with different ground surfaces set up can allow the sound designer to test this feature and fine tune the implementation of it before it impacts the larger project as a whole. Once this system has been perfected, it can be utilized and scaled for all the characters in the game.
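To make this concrete, here is a minimal Python sketch of the kind of logic such a footstep prototype exercises. The surface names and sound identifiers are hypothetical and not tied to any particular engine; the point is simply mapping each surface to a pool of variations and picking one per step event.

```python
import random

# Hypothetical surface-to-variation mapping for a footstep prototype.
FOOTSTEP_POOLS = {
    "grass": ["fs_grass_01", "fs_grass_02", "fs_grass_03"],
    "gravel": ["fs_gravel_01", "fs_gravel_02", "fs_gravel_03"],
    "wood": ["fs_wood_01", "fs_wood_02", "fs_wood_03"],
}

def footstep_sound(surface: str) -> str:
    """Return a random footstep variation for the surface under the character."""
    # Fall back to a default pool for surfaces we haven't authored yet.
    pool = FOOTSTEP_POOLS.get(surface, FOOTSTEP_POOLS["grass"])
    return random.choice(pool)
```

In a real project this logic would live in the engine or middleware, but a toy version like this is enough to settle questions like fallback behaviour and variation counts before the system is scaled to every character.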
Continuing from the above example, let’s say that this game also has enemy NPCs that patrol until alerted by the player. It might be prudent to test the attenuation of the enemies’ footsteps so that the player has clear audio cues about where an NPC is located, even if they can’t see them. This is a good time to prototype not only this particular attenuation, but also how attenuations in general are going to be handled in the game.
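The maths behind a basic attenuation curve can be prototyped outside the engine too. Below is a small Python sketch of linear distance attenuation; the distance values and the linear curve are illustrative assumptions, since engines and middleware also offer logarithmic and fully custom curves.

```python
def attenuated_volume(distance: float, min_dist: float = 2.0, max_dist: float = 30.0) -> float:
    """Linear attenuation: full volume inside min_dist, silent beyond max_dist."""
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    # Fade linearly from 1.0 at min_dist down to 0.0 at max_dist.
    return 1.0 - (distance - min_dist) / (max_dist - min_dist)
```

Testing a curve like this against real gameplay distances early helps decide where footsteps should become audible before the behaviour is locked in for every sound in the game.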
These are just simple examples, but I hope I’ve conveyed the idea that prototyping audio in small, controlled stages allows it to be fine-tuned and perfected so that it can be brought over and scaled to the larger project with minimal hiccups.
Sound Design’s Influence on Game Mechanics
Sound design can directly shape and influence gameplay mechanics. For example, audio can help with environment navigation and puzzle problem solving. Some games have specific audio cues that hint at the presence of a hidden object or a clue to a puzzle. Implementing such audio cues in this case would allow the game designers more options when designing puzzles, knowing that they can utilize audio cues in guiding the player to complete a puzzle.
Audio cues can be a powerful way to reinforce some of your game's core mechanics. One of the main mechanics in the Dark Souls series, for example, is the parry: if the player times it correctly, they can avoid all of the damage of an attack. A successful parry is accompanied by a distinct audible cue to inform the player that they have successfully performed that action. They can then adjust their strategy on the fly, knowing that they have an opening to go on the offensive.
Audio can also provide clear status indicators, with signals like a heartbeat and heavy breathing communicating that a player's health is low. Using audio for cues like these means the player doesn’t have to rely solely on visual cues, freeing up their attention for other aspects of the game.
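A status cue like this is ultimately driven by very simple game-state logic. Here is a hedged Python sketch; the threshold value and cue names are purely illustrative, not taken from any particular game.

```python
def health_cues(health_fraction: float, low_threshold: float = 0.25) -> list[str]:
    """Return the status-cue loops that should be playing for a given health level."""
    cues = []
    if health_fraction <= low_threshold:
        # Below the threshold, layer in low-health warning loops.
        cues.append("heartbeat_loop")
        cues.append("heavy_breathing_loop")
    return cues
```

In practice this check would run whenever health changes, starting and stopping the loops (or driving a middleware parameter) rather than returning a list, but the decision logic is the same.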
These are just a few examples where audio can be used to shape and integrate with core gameplay mechanics that will enhance gameplay.
Creating an Audio Style Guide
An audio style guide helps maintain consistency in sound design throughout the game development process. It can provide direction and clarity to the sound design of the game. A style guide can also be an important resource for a new team member who joins the project further along in its development cycle, helping them get up and running quicker.
A sound style guide could contain things like: sound palette, audio plugin chains, technical delivery specifications and audio references from other media.
Having an audio style guide in place can streamline the sound creation process and maintain sonic consistency throughout all stages of the game’s development cycle.
Technical Considerations and Early Integration
It’s very important to plan for audio integration within the game development process as early as possible. Ask questions such as: are you going to use your game engine’s default audio system? Are you going to use audio middleware, and if so, which one? Whichever option you choose, how well will it interface and scale with your game as it grows in scope and complexity?
Knowing the technical needs of the audio in your game will help determine what would be a good fit for your project. If you are making a relatively simple 2D platforming game in Unity, Unity’s default audio system might be more than enough for your audio needs. But if you’re making a large open-world game, with lots of complex systems and multiple sound designers, audio middleware would likely be a better solution.
The Long-Term Benefits of Early Sound Design Integration
The advantages of integrating sound design early into the game development process are numerous. The audio team gets to iterate and grow alongside your game’s other systems, resulting in a tight final integration that will benefit gameplay and the final product immensely.
The audio will also have been tailored from the get-go to match the emotional, aesthetic and gameplay goals of your game, making for a more compelling and emotionally impactful experience.
On a technical level, it will also mean fewer audio issues and bugs as you approach launch, because the audio will have had the time it needed to test, investigate and fix problematic issues earlier in the development cycle.
I would encourage more game developers to view sound design (and audio in general) as a core component of the development process from the outset. Treating it as such will result in a markedly improved final product.
Do you need a freelance sound designer for your game? Get in touch and tell me about your project today!
The Author
This article was written by Oliver Smith, a long-time remote-working freelance sound designer dedicated to making gameplay-enhancing sound for games.
Sound Design for an ARPG - Wolcen: Lords of Mayhem
Sound Design for an Action Role Playing Game
As a freelance sound designer, I had lots of fun working on Wolcen’s Act IV. A significant portion of my work was designing and implementing sounds for 15 brand new enemy mobs, and I was also trusted with designing the audio for the final boss of the game. So I wanted to share a little about my process of designing and implementing sounds for mobs in an Action Role Playing Game (ARPG) like Wolcen: Lords of Mayhem.
Mob Categories
Mobs in Wolcen are typically divided into two categories: trash mobs and elite mobs. Trash mobs are the weakest enemies in the game. They are not very threatening by themselves but often appear in large numbers, which has a bearing on sound design decisions that I’ll discuss in more depth further on. These mobs are essentially cannon fodder; the player can often obliterate whole swathes of them with little effort, so their sounds needed to acknowledge and convey that. Generally, I’d make sounds of a short duration without long tails. Because these mobs spawn in such large numbers, the frequency of their attacks in groups can be considerable, so it was important to keep their sounds short and to the point.
Elite mobs, on the other hand, are more impressive, physically larger and more threatening to the player. They appear in smaller numbers, but have more elaborate visual effects on their attacks and abilities and do more damage. The sounds for the elite enemies therefore needed to be more impressive to match their stature. Because of this I could make sounds for the elite mobs more imposing, detailed and with longer tails. They could take up more sonic space.
An example of an elite mob attack - the large circular fire explosion
Pre-Attack Sounds
Pre-attack sounds are usually triggered at the start of an attack’s animation. Trash mobs usually don’t have a pre-attack sound: their attacks are commonly low damage and therefore low priority for the player. As they are not normally a serious threat, pre-attack sounds for them would be redundant and would clog up audio mix space that is better saved for more important audio cues.
Elite mobs’ pre-attack sounds are exactly those kinds of cues. Most of the elite mobs that I worked on have a pre-attack sound. Because these mobs are much more threatening than trash mobs, the sound provides a useful audible warning when they are about to launch an attack, which can give the player time to avoid it by dodging or using a defensive ability. In essence, it gives the player a chance to respond to the attack by listening to the audio cue.
Imparting Identity and Lore with Sound
Concept Art for two of the mobs I worked on: The Spectre Mage and the Republic Golgoth Flamethrower
In addition to informing the player of imminent attacks, the sound design of the mobs could also impart a lot of their characteristics and lore to the player. Wolcen’s world and story have a lot of lore behind them, which plays a big part in the game. Nearly all mobs in Wolcen are grouped into different in-world factions, and the lore behind these factions helped guide the direction for their sonic palettes.
Republic mobs tend to have good equipment and are technologically advanced compared to other factions. I emphasized this by using lots of mechanical and electrical sounds that power some of their weapons and armored suits. Some of the weapons they wield include rifles, gatling guns and flamethrowers.
The Cult of Souls are an undead faction made up of various skeletons and specters. Many of their skills are based around frost, ice and soul magic. Including frost and ice in their precast sounds would therefore help indicate to the player what kind of elemental attack would be heading their way. If they were hit by one of these frost attacks they would potentially be frozen for a short duration. I also recorded my voice to create breathy sounds that were used as building block elements for their soul magic attacks.
One of my Reaper sessions for a Cult of Souls Mob - The Soul Casket
I also tried to use sound to inform the player about the quality of a particular mob’s weapons and equipment. For example, the Spectre Lancer wields a very flimsy-looking spear, so I made sure my sound design conveyed this by choosing rougher, blunter metal sounds and avoiding the smooth, resonant ‘shing’ sounds that tend to convey sharp, well-kept blades.
Audio Implementation with Wwise and CryEngine
During implementation it was very important to limit the voices of mob attacks and skills that had a high rate of fire. Not doing this could lead to a machine gun-like effect of certain sounds triggering repeatedly during gameplay, which is extremely undesirable: it becomes annoying for the player to listen to, clogs up the audio mix and can mask other important game audio.
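The idea behind voice limiting can be sketched in a few lines of Python. This is a generic illustration of capping instances and stealing the oldest voice, not how Wwise implements it internally; in Wwise you would configure playback limits and voice-stealing behaviour on the project hierarchy instead.

```python
class VoiceLimiter:
    """Cap simultaneous instances of a sound; steal the oldest voice when full."""

    def __init__(self, max_voices: int = 3):
        self.max_voices = max_voices
        self.active = []  # active voice ids, oldest first

    def play(self, voice_id):
        """Register a new voice; return the stolen voice id, if any."""
        stolen = None
        if len(self.active) >= self.max_voices:
            stolen = self.active.pop(0)  # steal (stop) the oldest voice
        self.active.append(voice_id)
        return stolen

    def stop(self, voice_id):
        """Remove a voice that finished playing naturally."""
        if voice_id in self.active:
            self.active.remove(voice_id)
```

With a cap of two or three voices per rapid-fire attack, a swarm of trash mobs can keep firing without the mix ever stacking dozens of identical sounds on top of each other.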
Creating variations of the mob attacks and skills was important. I strived to create variations that walked a fine line: different enough to avoid the repetitive nature of a frequently cast attack or skill, but not so different that the player could no longer identify each attack or skill from the familiar sound that it made. Generally speaking, trash mobs had more variations than elite mobs because of their faster rate of attack.
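One common way to balance variation against recognisability is a "shuffle bag": every variation plays once in random order before any repeats, and the same variation never plays twice in a row. The Python sketch below is just the idea in code; random containers in middleware like Wwise offer similar avoid-repetition behaviour out of the box.

```python
import random

class ShuffleBag:
    """Cycle through all variations in random order with no immediate repeats."""

    def __init__(self, variations):
        self.variations = list(variations)
        self.bag = []
        self.last = None

    def next(self):
        if not self.bag:
            # Refill and reshuffle once every variation has played.
            self.bag = self.variations[:]
            random.shuffle(self.bag)
            # Avoid repeating the last-played variation across the refill boundary.
            if len(self.bag) > 1 and self.bag[-1] == self.last:
                self.bag[-1], self.bag[0] = self.bag[0], self.bag[-1]
        self.last = self.bag.pop()
        return self.last
```

Compared with purely random selection, a shuffle bag guarantees the player hears the full set of variations evenly, which matters most for high-frequency sounds like trash-mob attacks.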
The general process of implementation went like this. First, I’d set up my sounds in Wwise, paying careful attention to the randomisation parameters, attenuations and voice limiting. Audio levels were roughly set at this stage; the final mixing would come later, after I’d had a chance to test my sounds in the game and iterate on them until we were happy with them.
Next, using the CryEngine audio editor, I would create audio events and link them to the ones I had created in Wwise. Put simply, the CryEngine audio event called the corresponding Wwise event.
Once these events were set up, implementation in CryEngine usually happened in one of three ways. The first, and most straightforward, was putting the event triggers for my sounds in the XML file of a particular mob. Each mob had an XML file that determined most of its game data.
There are many predefined ways to trigger a sound using this method, like OnAnimStart (which would trigger a sound at the start of an animation) and OnAnimCast (which would trigger a sound on the ‘cast’ portion of a particular skill or projectile). I believe these and other parameters were created internally for Wolcen to provide the audio functionality we required.
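To give a feel for the shape of this approach, here is a purely hypothetical XML fragment. The element names and structure are invented for illustration and do not reflect Wolcen’s actual data format; only the OnAnimStart and OnAnimCast trigger names come from the description above.

```xml
<!-- Hypothetical example only; not actual Wolcen game data -->
<Mob name="SpectreLancer">
  <AudioTriggers>
    <Trigger type="OnAnimStart" anim="spear_attack" event="Play_SpectreLancer_PreAttack" />
    <Trigger type="OnAnimCast" anim="spear_attack" event="Play_SpectreLancer_Attack" />
  </AudioTriggers>
</Mob>
```

The appeal of a data-driven setup like this is that a sound designer can hook events to animations without touching code, keeping iteration fast.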
The second way I implemented mob sounds was attaching a sound directly to an animation using CryEngine’s Animation Events. This was sometimes necessary to do when precise timing was required or when the XML method was not meeting our requirements for one reason or another. Implementing in this way allows you to scrub through an animation frame by frame and attach a sound precisely where you want it to trigger. I used this method quite a lot. It allowed a lot of flexibility in attaching sounds on a very granular level which increased the detail I could achieve. This was particularly helpful for elite mobs which had elaborate, extensive and multipart skills.
The third main way I implemented sounds was via particle effects. This allowed me to attach my sounds directly to the particle effects used by a mob’s attack or ability. A sound could be triggered by any of the layers in a particle effect, allowing for fine control. For example, I could attach a fire sound directly to the fire layer of the visual effects, giving precise control over that sound’s triggering and duration: we might want the fire sound to fizzle out when the fire dies, even while smoke effects are still visible, and this setup ensures the fire sound stops once the fire visual effect has been destroyed. This method of implementation was often used for the damaging portion of a skill, when a mob’s skill hits the player character directly.
Sometimes none of those methods of implementation would suffice, and I would come across a situation requiring programmer support. This was the case for the Electric Fences created by one of the Republic Golgoth Flamethrower’s abilities. This skill created a new game object in the world: an electric fence that could entrap the player. Because this skill created a new entity without an XML file or other parameters I could easily modify, I had to work with a programmer to implement my sounds directly via code. This was quite a rare occasion, though.
I had a great time working on the mobs for Wolcen’s Act IV. Because I had such control over the implementation of my sounds, it really made me think implementation-first, which informed how I approached my designs before I even started them. I also got intimately familiar with how audio implementation works in CryEngine.
Do you need an experienced sound designer for your next RPG or action game? Get in touch and tell me about your project today!