In this post, we are going to learn how to mix beats in FL Studio.
When you start out in music production, your first step is learning how to make beats.
The next step is to learn mixing in FL Studio.
I will also cover the basics of mastering, quickly discussing the difference between mastering and mixing.
I hope this FL Studio mixing guide solidifies these topics for you! (And prevents you from buying unnecessary products.)
What You Will Learn
- Why do we even mix beats in the first place
- The difference between mixing and mastering your song
- What to aim for when mixing (metering, gain staging, headroom, and end result.)
- Setting up for a fast mixing workflow
- Fundamentals – EQ, Compression and Volume
- Series vs. parallel mixing and techniques + plugin position matters!
- Popular mixing tools for high quality mixes (stock plugins vs. third party plugins)
- Advanced techniques (sends, distortion, and automation.)
Keep Learning with My FL Studio Course + Book!
Learn FL Studio Properly, Fast, with Proven Results!
Beginner’s Course – [20 Videos @ 4.25 Hours Long]
Beginner’s Book – [141 Pages + 71 Images!!]
What’s the Point of Mixing Beats in FL Studio?
Here’s something interesting to ponder:
Recorded audio and digital audio are two different topics!
Digital audio is super clean sounding by itself – these are the sounds that come from VSTs and quality sound kits.
We may still EQ and compress our digital signals, but simply to make space in the mix, or for creative choices.
But comparing digital to recorded audio now:
Recorded audio all depends on the actual recording.
What if there was background noise such as a computer fan, or the vocalist was too far away from the microphone?
These all impact the end result of that recording!
If you have a poor recording, you will need to do more extreme processing with EQ and other mixing tools to achieve a high-quality result.
Your digital signals are already high quality – maybe volumes just need to be adjusted?
However, I do suggest you do some tweaks to your track to add clarity to your mix!
It’s really easy to go overboard when you start mixing your beats.
I recommend saving your original beat first so you can always start over!
Difference Between Mixing and Mastering in Audio Production
What is Mixing in Music Production:
Mixing is when you take all the individual sounds and try to balance them in volume and EQ.
Let’s say you have 5 instruments, some drums and claps, and some hi-hats.
You would route all of these sounds to their own mixer insert and apply volume changes, eq adjustments, or compress a sound for consistency or creativity.
Your choices can drastically alter how the song originally sounded!
Sometimes it’s just a matter of a few quick changes to emphasize certain elements of a song.
All these instruments, which are on their own mixer inserts, actually get summed into one mixer insert!
This is called your MASTER bus.
As a mixer, you want to leave what’s called headroom, as this allows a mastering engineer to do their job without being handcuffed.
Headroom means making sure the loudest peak in your song stays below a certain level.
I’ve had a mastering engineer suggest to me no louder than -3dB, but I’ve read online some mastering engineers ask for -6dB!
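To make headroom concrete, here’s a rough Python sketch (using NumPy, purely illustrative – real meters work on your exported audio file) of how you could check the peak level of a track in dBFS:

```python
import numpy as np

def peak_dbfs(samples):
    """Loudest sample of a signal in dBFS (0 dBFS = digital full scale)."""
    return 20 * np.log10(np.max(np.abs(samples)))

# A test tone peaking at half of full scale:
tone = 0.5 * np.sin(np.linspace(0, 2 * np.pi * 440, 44100))
print(round(peak_dbfs(tone), 1))  # -6.0 -> this mix leaves 6dB of headroom
```

If that number comes out louder than -3dB (or -6dB, depending on your mastering engineer), pull your master fader down before exporting.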
What is Mastering in Music Production:
Mastering is when you take your final mix, with the headroom you allowed for, and bring up the volume to a commercial loudness.
Don’t get me wrong – there are many other aspects to mastering which we will cover shortly.
There is a thing called the loudness wars, which is slowly ending as new standards and measurements are being brought in.
So loudness is by no means your only goal of mastering.
Let’s list some other aspects of mastering that I know of:
- Intro / Outro Times – Adjusting the fade in and fade out of the song, as well as time spacing from one song to the next. (Can have a big impact on emotion of an album!)
- Volume Consistency from Track to Track – This is more in regards to a full album/beat tape. You want to make sure the listener does not have to adjust their volume knob from one song to the next.
- EQ Balance – Similar to consistent volume, you want to have a decent balance in EQ from one song to the next. You don’t want one song super bright, and one song super dull.
- Embedding ISRC Codes – ISRC codes allow your tracks to receive proper compensation of revenue/royalties! It’s a unique number assigned to an individual song – each song gets its own ISRC code!
- A Trained Set of Professional Ears – Having a second set of trained ears to form an opinion on your track is important, if you have the money! They may adjust the song’s EQ balance as an artistic choice, and should have a properly treated room for accurate monitoring.
The difference between mixing and mastering can pretty much be summed up like this:
– Mixing is working on the individual instruments/sounds of a track
– Mastering is working on the final mix – polishing up the mix to be released publicly.
For more info on mastering, you can read how to master a song in fl studio.
What are we Aiming for in Mixing? (Goal / End Result)
You will hear people say:
Mixing is an Art.
Yes, this is true. But really, you can say this about most things in life.
I’ve been an electrician for many years, besides doing music production.
We bend metal pipe called EMT. If done beautifully, people also call this “an art”.
I’m just trying to build-up your mindset as a producer/mixer.
You can only be an artist if you have knowledge of what you’re doing.
Then you can bend the rules!
So before we get all artistic with our mixing, we have to figure out what we are aiming for while mixing.
Here are a few audio mixing tips that I feel are essential to be aware of:
- How to monitor the loudness of your mix
- What mixing tools are available to us?
- What volume you should set your speakers to
- How to compare your music track to your competition
- When do you know your mix is actually complete?
Measuring Audio Loudness:
When I say loudness, I am not talking about the volume you set your speakers.
I am talking about measuring the loudness of one track against another with respect to time, frequency content, and dynamics.
I’ve already mentioned the loudness wars.
If you’re totally new, you must watch this video on the loudness wars.
This is important to understand because big companies like YouTube, Spotify, and other streaming services are starting to implement loudness standards.
This is a bit off topic from this guide on mixing, as this is more of a mastering subject.
But to fill you in:
Audio is a very hard source to measure.
Let’s say you have a loud gunshot and a violin note.
The loud gunshot will have a very loud peak, but a quiet body to it.
Whereas the violin’s volume will be more consistent – the beginning of the sound will not be extremely louder than its body.
If we match both sounds’ peaks, the violin would actually be WAY LOUDER than the gunshot.
This tells us that measuring the peak of your audio is not an accurate way to gauge loudness.
Then there was RMS – this gives us mixers a much better idea of loudness.
But there’s also a problem here:
RMS isn’t super accurate either.
Certain frequencies, like low frequencies, can make your track seem louder than it actually is!
Step in LUFS.
This is the current loudness measuring standard that we use for loudness monitoring.
It takes into account frequency content, and measures a song’s loudness over time.
Some frequencies are louder to our ears than others. That’s why measuring frequency content is important!
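You can demonstrate the peak-versus-RMS point numerically. Here’s a quick sketch (NumPy, illustrative only) with a decaying “gunshot” and a sustained “violin” normalized to identical peaks:

```python
import numpy as np

def peak_db(x):
    return 20 * np.log10(np.max(np.abs(x)))

def rms_db(x):
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

t = np.linspace(0, 1, 44100)  # one second of audio

# "Gunshot": a transient that decays almost instantly.
gunshot = np.exp(-t * 200) * np.sin(2 * np.pi * 100 * t)
gunshot *= 0.9 / np.max(np.abs(gunshot))  # normalize its peak to 0.9

# "Violin": a sustained note with the same 0.9 peak.
violin = 0.9 * np.sin(2 * np.pi * 440 * t)

print(round(peak_db(gunshot), 1), round(peak_db(violin), 1))  # identical peaks
print(round(rms_db(gunshot), 1), round(rms_db(violin), 1))    # violin is far louder in RMS
```

LUFS goes one step further than RMS by frequency-weighting the signal before averaging, which neither of these simple meters does.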
You will read about ATSC A/85 and EBU R128.
For more info, read TCElectronic’s article on loudness explained.
But again, there are still some kinks to work out – Thomas Lund of TC Electronic has an amazing PDF case study suggesting that -23 LUFS may just be too quiet for our generation.
It also seems there are a few minor differences in the standard from country to country.
That should bring you up to speed on the current trends of mixing, as things are starting to change.
The loudness wars soon will be over.
The reason is that if you push your volume very loud in mastering to compete with others, it wrecks your transients, skewing the impact of your actual song!
If these streaming services are only playing audio at a certain level, they will actually turn down the volume of these loud songs.
But guess what?:
Since the loud song’s transients are already clipped off by that processing, you’re stuck with a sausage of an audio file playing at the same volume as a beautiful mix with preserved transients.
I’d also like to point you to a very iconic person in this industry fighting against the loudness war.
He is a mastering engineer who started Dynamic Range Day – making us producers aware of loudness, and the harm it does to our music.
So with that said, I personally monitor my music in a couple of ways:
My mixing monitor tools:
- Youlean Loudness Meter – I tend to only use this at my mastering stage, but it’s a FREE LUFS meter to get an idea of how loud to master your tracks.
- FabFilter Pro-L – This is my absolute favorite limiter, it also has Bob Katz’ K-Metering system, which is another monitoring method of loudness. (FabFilter makes very premium plugins.)
Proper Mixing Volume for Speakers
You only have one set of ears. Protect them!
Mixing at loud volumes for long periods of time is a great way to damage your hearing, and it’s a great way to get an inaccurate mix!
You should be mixing at quieter volumes – comfortable is a good word here!
That’s not to say you should never check your mix at loud volumes, though.
I tend to set my volume quieter, but still loud enough to hear what’s going on.
Once you’ve got a rough mix going on, I would suggest turning up your volume to see how it sounds at a louder volume!
After turning up my volume, I actually walk around my studio for a bit.
Since I have a dog, this makes a great chance to lay down and cuddle with the guy lol
If I notice anything while laying down scratching his chin, I’ll get up, make a quick adjustment, and go right back to scratchin’ !
Compare your Mix with Reference Tracks
I’m super bad at using an example song to base my mix off of.
I do this for two reasons – which may be poor choices though:
- I mix what I think sounds good – I do not want another’s creativeness to impact my own. On a professional level though, you should at least use a reference track when your mix is done to see how your mix compares to the commercial release. If your track is poorly mixed, it will be very hard to release your music to radio stations etc.
- I tend to release volumes / albums – Because I release a bulk collection of songs at a time, I mix/master these as a group. If someone listens to my album, the songs are consistent in volume and EQ for the most part. But you should still throw a reference track in there, just to see where you’re at.
The whole idea behind reference tracks is to compare your mix to a commercial release.
Since these are professionals in the industry mixing these songs, you can compare your track to theirs.
A simple A/B comparison is what you do here.
You can simply set this up in the mixer and solo out each song to see how your song compares.
Maybe you need to boost yours louder, or maybe add some high-end?
But an important point regarding reference tracks is making sure their loudness matches what you want to achieve.
You do this by playing their song, using the free LUFS meter by Youlean, and watching the Integrated LUFS.
When is your Mix Actually Done?
In my opinion, when a song is balanced and has the emotion you are after.
When nearing the finishing touches on a song, I really focus on its impact from intro to ending.
I start the song at the beginning, get up out of my chair, walk around and listen through the whole song until something makes me stop.
This could be poor arrangement, a weak build-up/transition, or maybe a snare is just a bit too loud.
I will fix the particular element, and start again.
I keep tweaking my mix like this until I feel I have the balance and emotion I am looking for.
Setup Tips for Improving Mixdowns
Before you start any mix, I do recommend the following:
You may feel this is boring – but with a large song, this saves time later in your mix!
- Setting up an FL Studio Template – I have created a template if you’d like to download it. It is set up for mix time, allowing you to use sends like reverb, delay, and distortion for quick mixing techniques.
- Color coding your sounds before adding them to the mixer – Colors help so much! I don’t have specific colors every time, but for each song, I do select certain colors for certain elements. Drums would be green for example, snares could be orange.
- Make sure every sound has its own mixer insert – This allows for ultimate mixing flexibility! If you have layered snares, and one snare is overpowering the other, it’s easy to reduce its volume.
- Creating groups for easier adjustments – I tend to group certain elements. Let’s say I have three drums. They would all be on their own inserts, but I’d route them to another mixer insert which controls all three. Again, if one drum is too loud, I can easily single it out. But if all drums are too loud, or I want to apply compression to all drums, I can add a compressor on that drums group!
Creating an FL Studio Template
I actually have a full walkthrough on creating a template in FL Studio.
Over the years, you figure out how you work.
Simply go back and tweak your template to keep optimizing your FL Studio workflow!
Color-Coding Sounds Before Adding to Mixer
I’m not sure how other DAWs handle this, but FL Studio makes it incredibly easy.
Make sure you color your sounds before you route them to your mixer.
The color and label follow when routed to the mixer, which saves an extra step!
Mixing Fundamentals – Volume, EQ, Compression
The most important part of mixing is first analyzing the track.
Hear what you’d like to bring out, and take note of what you’d like to fix.
I want to introduce you to the 3 most fundamental mixing tools:
- Volume (Levels)
- Equalizer (EQ)
- Compression (Dynamics)
Before you get into any plugins, you should first start with volume.
Adjusting Volume while Mixing
Not everything can be front and center in music.
This is why I said you must first analyze your track to see what you want to stand out.
This can change as you mix!
As I actually make beats, I adjust my volumes as I go.
But if a client sent you files to mix, I suggest your first step be to analyze the track, and adjust volumes as needed!
Using EQ – Different EQ Techniques
Understanding the basics of EQ will go a long way.
We will first talk about what an EQ is, and then discuss different EQing techniques like:
- Subtractive EQ
- Additive EQ
- Making space for other instruments
- EQ frequency chart
So just what is an EQ, and how do you use it?
My favorite stock EQ plugin in FL Studio is the Fruity Parametric EQ 2:
With the FL Studio 12 update, most of their plugins now have vector-based GUIs.
This means you can resize them to fill your whole screen at high resolution!
To the left of the EQ are your low frequencies; your bass sounds (purple band 1).
As you work your way to the right, you move into the mids (yellow band 4), and high frequencies (blue band 7).
When I first started, I thought if I used band 6 to boost the low-end, it’d give me a different sound… haha
It doesn’t work this way.
A parametric EQ is awesome because it gives you multiple bands to focus on certain frequencies.
You can be precise, or have a wide boost, depending on how you’ve set your band’s Q:
Band 2 is a narrow Q – Band 6 is a wide Q.
A narrow Q allows you to hone in on certain frequencies, usually good for removing nasty frequencies you don’t like.
A wide Q allows for a gentle increase or decrease over a large range of frequencies. It’s harder to notice, and sounds more natural.
So if you were to use band 6 to boost your low-end at 70Hz, it’s exactly the same as using band 2 to boost at 70Hz (given the same Q and gain).
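If you’re curious what the Q knob is doing under the hood, here’s a sketch using the well-known “Audio EQ Cookbook” peaking-filter formulas (NumPy/SciPy; this is a generic biquad for illustration, not Parametric EQ 2’s exact internals):

```python
import numpy as np
from scipy.signal import freqz

def peaking_eq(f0, gain_db, q, fs=44100):
    """Biquad peaking-EQ coefficients from the RBJ Audio EQ Cookbook."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

for q in (0.5, 8.0):  # wide Q vs. narrow Q
    b, a = peaking_eq(70, 6.0, q)
    _, h = freqz(b, a, worN=[70, 140], fs=44100)
    print(q, np.round(20 * np.log10(np.abs(h)), 1))
# Both hit +6dB at 70Hz, but the wide Q still boosts strongly
# an octave away at 140Hz, while the narrow Q barely touches it.
```

Same center frequency, same gain – the Q alone decides how far the boost spreads into neighboring frequencies.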
If you want more information on EQ (also known as filters), you can check out my premium course All About Filters.
Subtractive and Additive EQ
A technique for EQing instruments is called subtractive EQ.
This is kind of backwards thinking.
If we want a sound to be brighter, we boost the highs, right?
But with subtractive EQ, instead of boosting the highs, we may reduce the mids, which in turn, boosts the highs!
This is a popular technique for allowing more room for other instruments to breathe.
By reducing certain frequencies, you’re allowing other frequencies to stand out.
For myself, I tend to use both additive and subtractive EQ.
I really just move my EQ until I get my desired result.
Sometimes this ends up being extreme settings, but if that’s how I feel my song should sound, that’s what I dial in!
As you continue, you will find what works for you and what doesn’t.
That’s why I recommend saving your original song first before you get into mixing as you’re learning!
Making Space for Other Instruments
As mentioned with subtractive EQ, we are allowing other frequencies to breathe by reducing certain frequencies, right?
Well what if you have two high frequency synths which are competing for the same space?
A cool trick is to reduce a certain frequency and boost another frequency on one synth.
On the other synth, you boost and reduce the opposite frequencies!
You can learn more about making space for your instruments in my post:
EQ Frequency Chart
Sometimes these frequency charts are cool to look at just to see where certain instruments land in the frequency spectrum.
You can see an EQ frequency chart over at Howtomakeelectronicmusic with Petri.
How to use a Compressor – What is Compression?
Compression is probably the trickiest topic of all!
Even after 8 years of producing, I still don’t understand compression on a high caliber level.
But here’s the basics:
Most compressors have these basic settings: threshold, ratio, attack, and release.
When your audio signal goes over the threshold, the ratio determines how aggressive to turn down the volume over the threshold.
Now this gets tricky with the attack and release.
Let’s say your signal is at -10dB, and your compressor’s threshold is set at -15dB.
This means your audio signal is 5dB over the threshold.
That 5dB will get compressed depending on your ratio.
If we do simple math – not including attack and release – and set our ratio to 2:1, our audio signal should come out at -12.5dB, since 5dB divided by 2 = 2.5dB over the threshold.
But this is why compression is tricky:
A compressor’s attack determines how quickly the full amount of that 2:1 ratio is applied!
If the audio signal goes over the threshold and you have a long attack, the compressor does start to compress right away, but it will slowly work its way up to the full 2:1.
When this audio signal goes below the threshold, so no more compression is happening, the release determines how long it takes for the sound’s volume to return to normal.
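Ignoring attack and release, the threshold/ratio math above is simple enough to sketch in a few lines of Python (illustrative only):

```python
def compress_db(level_db, threshold_db=-15.0, ratio=2.0):
    """Static compression curve: reduce whatever exceeds the threshold."""
    if level_db <= threshold_db:
        return level_db                  # below the threshold: untouched
    over = level_db - threshold_db       # dB above the threshold
    return threshold_db + over / ratio   # only 1/ratio of the overshoot survives

print(compress_db(-10.0))  # -12.5 -> the 5dB overshoot is halved to 2.5dB
print(compress_db(-20.0))  # -20.0 -> below the threshold, unchanged
```

A real compressor applies this gain reduction gradually, which is exactly what the attack and release times control.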
So why would you even want to use compression in the first place?
But here’s the thing:
You can actually use compression in a few different ways:
- Keeping a more consistent volume to your instrument
- Molding or shaping a sound
- Sidechain compression for that EDM pump
Compression for Consistent Dynamics
A main goal of compression is to keep an audio level consistent.
You may hear people say that a certain element is getting lost in the mix – this could be a use for compression.
Let’s take a really dynamic piano piece – one that has loud notes and quiet notes.
These loud notes stand out great, but the quiet notes cannot be heard in the chorus with all the other instruments.
If we apply slightly more aggressive compression, we can bring the loud and quiet notes closer together in volume.
This does have pros and cons though:
- Your piano notes are now heard better
- The original emotion and natural sound of the piano playing is degraded (not in music quality, just in how it sounds)
Molding Sounds with Compression
Remember I said the attack and release knobs are really tricky with a compressor?
You can actually use these knobs to shape and mold a sound!
You must watch my tutorial on molding a sound with compression:
The longer you set your attack, the more of the initial transient you let through.
This can be really powerful on a kick drum to emphasize the punch of the drum.
How to Sidechain in FL Studio – What is Sidechain Compression?
Sidechain compression uses one signal to control another signal.
Let’s say you have a pad sound playing one chord, and also one kick drum playing on beat.
You can route your kick drum into your pad’s mixer channel as a trigger.
On the pad sound, you’d open a compressor and set the kick drum as an input.
Whenever the kick drum plays, it lowers the volume of the pad. That’s how you get that EDM pump sound – it’s through sidechain compression.
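Conceptually, the ducking works like this (a toy NumPy sketch, not how FL Studio computes it internally): the kick generates a decaying envelope, and that envelope pushes the pad’s volume down:

```python
import numpy as np

n = 100                       # a short timeline (arbitrary resolution)
pad = np.ones(n)              # sustained pad at a constant level

# Kick "trigger": an envelope that spikes on each hit, then decays.
trigger = np.zeros(n)
for beat in (0, 50):          # two kick hits
    decay = np.exp(-np.arange(n - beat) / 10.0)
    trigger[beat:] = np.maximum(trigger[beat:], decay)

ducked = pad * (1.0 - 0.8 * trigger)  # up to 80% gain reduction per hit

print(round(ducked[0], 2))    # 0.2  -> pad ducked hard on the kick
print(round(ducked[40], 2))   # 0.99 -> pad almost recovered between hits
```

That rise and fall of the pad’s volume between kick hits is the “pump” you hear in EDM.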
I have a sidechain compression series if you’d like to learn more:
- Read: What is Side Chaining?
Series vs. Parallel Processing in FL Studio Mixer
There is more than one way to route audio: series and parallel processing.
When you hear series, think of only one path for your audio to flow.
Any effects you add will process the signal one after another, each affecting it more and more.
When you hear parallel, think of many paths for audio to flow.
The benefit of working in parallel is that you can use sends to apply one effect to multiple instruments.
You can also dial in just the amount of effect that you’d like, or tap into a sound at certain points!
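Here’s the difference in a nutshell, as a tiny NumPy sketch (the “effects” are made up for illustration):

```python
import numpy as np

dry = 0.25 * np.sin(np.linspace(0, 2 * np.pi * 5, 1000))  # the untouched signal

def distort(x):
    return np.clip(x * 4, -1.0, 1.0)  # crude hard clipper

def boost(x):
    return x * 2.0                    # simple gain stage

# Series: one path – each effect processes the previous effect's output.
series = boost(distort(dry))

# Parallel: the dry path stays intact – each "send" is blended back in.
parallel = dry + 0.3 * distort(dry) + 0.3 * boost(dry)
```

In series, the order matters (distort-then-boost sounds different from boost-then-distort); in parallel, the 0.3 send amounts let you dial in exactly how much of each effect you hear.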
For more info on series and parallel, as well as sub groups:
Popular VST Mixing Plugins
I personally don’t use TONS of plugins.
They’re expensive, and most tend to do a similar job.
However, there are some plugins which significantly improve workflow.
Stock plugins are great, and even to date, I still like to work with certain stock plugins that come with FL Studio.
But there is a reason third-party plugins exist:
More features and more potential for creativity. (Improved workflow, too!)
If you want a full list of my favorite mixing plugins, you can view my favorite mixing plugins.
If there’s one plugin bundle you buy, I highly recommend FabFilter’s Pro Bundle:
- FabFilter’s Pro Bundle – This is an essential tool for me. FabFilter’s plugins have been an amazing boost to my productivity. This bundle is pricey, but FabFilter creates premium products. The workflow is worth it.
Advanced Mixing Techniques
Now that you are aware of the basics of volume, EQ, and compression, you can start dabbling with other tools.
- Complex mixer routing (sends)
- Reverb and delay
- Distortion
- Automation
Reverbs and delays will give tremendous emotion and depth to your music.
Distortion is actually a popular tool used in audio production.
You may think – use distortion on purpose?!
Distortion adds harmonics – new frequency content – which creates a sense of fullness.
The next time your drums aren’t standing out, try applying a distortion plugin on them to hear how much harder they hit, and how much more noticeable they become in your track.
You can see how I use these techniques with sends in my FL Studio template walkthrough video.
What about Automation in Mixing?
Automation is an incredibly powerful tool in music production.
Let’s say you want to have a high cut filter remove high frequencies for a song’s breakdown.
You can automate your EQ’s high cut filter to do this, and control how fast it sounds, too!
All this is shown in All About Filters.
Hopefully this FL Studio mixing guide broke down a lot of basics for the starting producer.
If you have any tips on how to mix beats in FL Studio, or if I’ve left anything out, please let me know!
All the tools and details I have suggested are my own personal recommendations after years of producing music.
If you have any questions, or would just like to say thanks, just do so in the comments! 🙂