VJ Master Documentation

Jumpstart Your Reactive Project

So, you have bought the plugin, now what? I will show you how to install it and show you around the included examples. I have provided a lot of base classes that you can derive from so you don’t have to reinvent the wheel.

Step 1: Getting Started With VJ Master

Estimated Completion Time:

10 minutes.

Installing

Familiarise yourself with the Unreal Docs on Working with Plugins.

If you are installing to a specific project: place the whole ST_VJMaster directory inside the Plugins folder located in your project's root folder (where the .uproject lives). If you currently have no plugins installed, you may need to create a Plugins folder if one does not exist.

If you are installing to the engine: you can do this from your library on the Epic Launcher. Otherwise, place the whole ST_VJMaster directory inside the Plugins\Marketplace folder located at UE_5.X\Engine\Plugins\Marketplace.

If you installed outside of the Epic Launcher, add the following to the “Plugins” array in your .uproject:

{
  "Name": "ST_VJMaster",
  "Enabled": true,
  "MarketplaceURL": "https://www.fab.com/listings/ad7fc137-5d5f-4b9b-be25-b495ffd7efa8"
}

When you arrive in the editor, navigate to Project Settings / ShaderTech.

Check each module's settings:

  1. ST Audio Control
  2. ST Audio Processing
  3. ST Audio Visualisation

The settings should be auto-populated with the base configurations stored in the plugin’s Config folder.

If they did not auto-populate, you will need to import them manually. Do so by choosing “Import…” on the top right.

Import the file “BaseST_VJMaster.ini” for all 3 modules’ settings.

Proceed to the next section where we can try an example.

Try An Example

Navigate to ST_VJMaster’s content folder in the Plugins section of the content browser.

If you can’t see the Plugins section in the Content Browser (CB), click “Settings” on the right of the CB and then “Show Plugin Content”.

Navigate to ST_VJMaster / Content / AudioVisualisation / Maps / Niagara. Open M_ST_AudioVis_TestLab_Niagara_Blueprint.

Play In Editor.

The blueprint on the map will automatically start playing back a sine wave on your default output device.

As long as you have a standard audio setup, you will see the visual react. Don’t fret if you don’t; we will clear that hurdle later.

Stop the play session.

Proceed to the next section where we can change what the visual reacts to.

Change the Vis Source

Let’s change what the visual reacts to. There are different options for playback and capture.

The blueprint BP_ST_N_VisExample has a dropdown where you can change the source of the visual. You will need to restart the session for the changes to take effect.

Capture/Playback (encoder/decoder) is referred to as “I/O” throughout the codebase.

Experiment with each I/O. By default, each I/O utilises your system’s default audio device for both playback and capture.

It is okay if you don’t hear (or see) anything as we will configure it in the next section.

Proceed to the next section where we can configure things.

Make It Your Own

Change the audio device in the audio processing configuration settings.

Navigate to Project Settings / ST Audio Processing. Here you will find the configuration settings for all I/Os.

Whichever I/O you chose in the last section, find the corresponding config.

You might have noticed that the Playback Capture settings do not contain Channels or Sample Format. This information is extracted from the audio file automatically.

To change the audio device that your chosen I/O operates on, we first must get a list of all possible devices on your computer.

You can either run

UST_AudioProcessingFunctionLibrary::GetPlaybackDeviceNames()
UST_AudioProcessingFunctionLibrary::GetCaptureDeviceNames()

or you can print to the output log by clicking the relevant option on the VJ Master toolbar menu.

The default DeviceName is

[System Default]

This will grab whatever device is set as the default on your system. You can change this by entering the name of one of the devices you retrieved earlier.

P.S. Loopback Capture on Windows is limited to WASAPI so it does not support specific device names.

You will be prompted to restart the session or the editor. Do so.

After you restart you will see the visualiser dancing to the I/O that is operating on the audio device of your choice.

Mission Complete!

Step 2: Getting To Grips With VJ Master

Estimated Completion Time:

20 minutes.

Understanding I/Os

Continue experimenting with the playback and capture settings.

I/Os are UEngineSubsystems.

They are initialised as soon as the editor or a packaged game starts up. They lie dormant until a sampler is registered against them.

The players and capturers should resemble some of the Miniaudio Examples. You can expand on them if you are brave enough. It’s not so hard, I did all the heavy lifting already.

Navigate to Project Settings / ST Audio Processing. Here you will find the configuration settings for all I/Os.

We will investigate the I/O’s FFTConfig in the next section. For now experiment with all the other settings.

P.S. As the I/O is an engine subsystem, the object will persist outside of runtime. If a sampler is not correctly deregistered, the I/O might retain the project settings configuration from the previous session. If this happens, restart the editor.

Proceed to the next section where we will learn about FFT.

Understanding FFT

Learn what Fast Fourier Transform is and how to work with it.

Miniaudio provides us with raw PCM audio samples: a stream of time-domain values.

Raw time-domain data is noisy and not something a human can easily work with. This is where the FFT comes in: it converts the time domain into the frequency domain, making the signal far easier to interpret.

Miniaudio manages its audio thread internally. I collect the fixed buffer output and pass it onto our very own FFT thread built upon an FRunnable.

If you work only inside blueprints, you will never see the FFTThread, but it is huge and where all the magic happens.

An FFTThread is created for each I/O.

The FFTThread handles splitting the spectrum into bands as well as recording the overall amplitude and pitch.

Beat tracking is a WIP but a good place to get you started.

The FFTThread is created for an I/O after the first sampler is registered against it. The FFTThread is terminated when all samplers have been deregistered from it. 

All the relevant buffers are sent back to our I/O who then distributes them on the game thread to every registered sampler.

Proceed to the next section where we will learn how to configure the FFT.

Configure The FFT

Change how the audio is processed. This section should be a lot more fun.

Since the FFTThread is built at runtime, you can modify these settings without needing to restart the editor. Yay!

You will however need to terminate the session for the changes to take effect.

Most of the items here have tooltips that will guide you through what they do, although, I would like to think they are quite self-explanatory.

Proceed to the next section where we will create our own sampler.

Subscribe To The Events

Creating your own sampler and listening to audio events.

Create any UObject or AActor and, wherever you think is best, register it against the I/O that you wish to sample.

To find an I/O there are many helpful functions inside

UST_AudioProcessingFunctionLibrary

Once you have got your I/O, run RegisterSampler(). Don’t forget to run DeregisterSampler() when you no longer need it.

Provide your object with the following interface:

IST_AudioProcessingInterface

This interface contains all the relevant audio functions. If you don’t need them all, don’t override them all.

Alternatively, subscribe directly to the dynamic multicast delegates on the I/O.

The buffers will arrive on the game thread and will include an identifier for the I/O, as you can subscribe to more than one if you choose.

Mission Complete!

Step 3: Working With VJ Master

Estimated Read Time:

5 minutes

Blueprints (and C++)

There are numerous interfaces and static functions to make use of.

Interfaces

Functions

Materials

I have provided easy-to-use material graph nodes for sampling the audio buffer.

Niagara

I created a lot of Niagara module scripts for my examples.

Examples

Take a look at the relevant maps in the plugin’s content folder. Here are a few.

BP_ST_N_VisExample showcases how to create an audio-reactive Niagara particle system in blueprints.

BP_ST_AudioSampler showcases how to create a blueprint actor that listens to audio events.

Step 4: Deep Dive Into VJ Master

Estimated Completion Time:

10 minutes

Audio Control

DMX, MIDI, OSC & Sockets

Navigate to Project Settings / ST Audio Control. Here you will find the configuration settings for the controllers.

The Audio Control Settings were created to control my examples. Some settings can be changed at runtime, others need a session restart.

When you create your own controllers, you are free to do what you want. I am a big project settings fan, but you don’t need to use them yourself.

Included are two simple-to-use base classes that provide basic functionality for:

  1. LED mapping a render target
  2. LED mapping a material

You can configure the

  1. color format: RGB, RGBA, BGRA
  2. alpha mapping: Log, Linear, Gamma

Audio Processing

Audio Playback/Capture & FFT

Navigate to Project Settings / ST Audio Processing. Here you will find the configuration settings for all I/Os.

Possible Audio IOs is used to create the buffers for the relevant I/Os that we pass onto compute shaders and the Niagara audio buffer. It should always be populated with a list of all the I/Os that you intend to use. It is required to generate the correct indexes for each I/O. I don’t like this implementation and I will remove it in the future.

You are already familiar with the rest of the settings if you followed through all of the stages.

When enabled, the Process Unchanged Buffer boolean performs the full analysis on the audio buffer even if no change has been detected. The likely scenario is when no audio is playing but you still want the visual to react.

When disabled, no analysis takes place, saving you some performance.

The FFTPadding Factor will increase the FFT resolution, but be careful with it, as it scales in powers of 2 and can quickly slow things down.

Audio Visualisation

Niagara, Materials, Compute Shaders

Navigate to Project Settings / ST Audio Visualisation. Here you will find the configuration settings for the visualisers.

Material Audio IOs is used to create the buffers for the relevant I/Os that we pass onto the material expressions.

The more I/Os you add, the bigger the buffers…

You will need to restart the editor for the change to take effect, as the buffers only get created once.

The Display configs are passed into Niagara through a data interface so that you can easily control some of the colours in my examples at runtime. You don’t need to use it if you don’t want to.

The Vis configs were created to control my examples. Some settings can be changed at runtime, others need a session restart.

When you create your own visualisers, you are free to do what you want. I am a big project settings fan, but you don’t need to use them yourself.

Step 5: Explore All The VJ Master Examples

Estimated Completion Time:

30 minutes

DMX Mat LED

AudioControl/AudioVisualisation

Shows how to send a reactive material over ArtNet.

OSC Mat LED

AudioControl/AudioVisualisation

Shows how to send a material over OSC.

Socket Mat LED

AudioControl/AudioVisualisation

Shows how to send a material over UDP/TCP.

Socket RT LED

AudioControl/AudioVisualisation

Shows how to capture the scene and send it over UDP/TCP.

BP Sampler

AudioProcessing

Shows how to set up a reactive blueprint.

Niagara Blueprint

AudioVisualisation

Shows how to send audio values to Niagara through arrays.

MIDI Mat

AudioControl/AudioVisualisation

Shows how to control a material from MIDI input.

Niagara Waveforms

AudioVisualisation

Shows how to set up multiple niagara systems that react to the spectrum bands, amplitude, pitch and beats.

Niagara Waveforms Stereo

AudioVisualisation

Separate waveforms for the left and right channels.

Niagara Buffers

AudioVisualisation

Shows how to pass the audio buffer into Niagara using a data interface instead of arrays.

Niagara Trails

AudioVisualisation

Shows how to set up a trail-like effect using the spectrum bands in Niagara.

Compute

AudioVisualisation

Shows how to set up a reactive boids visualisation using compute shaders and Niagara.

Niagara Spectrogram

AudioVisualisation

Shows how to set up a spectrogram using Niagara.

Dependencies

Niagara

Niagara effect systems.
Niagara is Unreal Engine’s next-generation VFX system. With Niagara, the technical artist has the ability to create additional functionality on their own, without the assistance of a programmer.

Niagara Plugin

Used in ST_AudioVisualisation module to provide examples of reactive audio in particle systems.

DMX Engine
Enables the core DMX engine functionality.
Functionality and assets for communication with DigitalMultiplexer (DMX) enabled devices.

DMX Engine Plugin

Used in ST_AudioControl module to provide DMX support.

DMX Protocol
Enables the DMX communication protocols.
Cross-platform support for both receiving and sending DMX data through sACN and Art-Net protocol variants.

DMX Protocol Plugin

Used in ST_AudioControl module to provide Art-Net support.

DMX Fixtures
DMX Light Fixtures Blueprints.
Provides example content of DMX-enabled light fixtures Blueprints you can use to spawn MVR.

DMX Fixtures Plugin

Used in AudioVisualisation blueprints to showcase a basic DMX example. I have no code dependencies here.

MIDI Device
Allows you to send and receive MIDI events through a simple API in either C++ or Blueprints.

The MIDI Device Support plugin adds the ability to send and receive MIDI (Musical Instrument Digital Interface) protocol messages. This is most frequently used to communicate between Unreal Engine and external hardware such as MIDI keyboards. However, because MIDI is a data protocol, a user can use data parsed from messages to drive non-audio related parameters, as well.

MIDI Plugin

Used in ST_AudioControl module to provide MIDI support.

OSC
Implements the OSC 1.0 specification, allowing users to send and receive OSC messages and bundles between remote clients or applications.

The OSC plugin provides an intuitive, type-safe Blueprint library that a developer can use to quickly iterate networked audio (and potentially other domain) data in Unreal Engine. When this plugin is enabled, you can send and receive OSC events through a simple API in either C++ or Blueprint. It supports sending and receiving messages and bundles, or combinations of bundles and messages.

OSC Plugin

Used in ST_AudioControl module to provide OSC support.

Third-Party Dependencies

Miniaudio

A single file audio playback and capture library for C and C++ with no external dependencies. It supports all major desktop and mobile platforms.

You can find it inside Source/Thirdparty/MiniAudio.
I have made some small modifications to Miniaudio to allow raw Windows headers through the Unreal Build Tool.

The following was added to the build.cs to allow Miniaudio to select the most efficient backend when compiling for Android, although Android is not officially supported by VJ Master.

PublicDefinitions.Add("MA_NO_AAUDIO=1");

https://miniaud.io/

This software is available under the following licenses:

  1. Public Domain (www.unlicense.org)
  2. MIT (free for commercial and non-commercial use)

Versions:

  1. v0.11.21 (VJMaster 1.0)

Used in ST_AudioProcessing module to read from and write to connected audio devices.

KISS FFT

A small and efficient Fast Fourier Transform library written in portable C. It supports complex and real FFT operations and is easy to integrate.

KISS FFT comes bundled with the engine as it is used in the Smooth Tool. Unreal could drop Kiss in the future, in which case we would bundle it with this plugin.

If you have a module that depends on VJMaster or you want to work with KISS, you will need to add the following to your build.cs:

if (Target.Platform == UnrealTargetPlatform.Win64 || Target.Platform == UnrealTargetPlatform.Mac || Target.Platform == UnrealTargetPlatform.Linux)
{
   AddEngineThirdPartyPrivateStaticDependencies(Target, "Kiss_FFT");
}
else
{
   PublicDefinitions.Add("WITH_KISSFFT=0");
}

https://github.com/mborgerding/kissfft

This software is available under the following licenses:

  1. BSD 3-clause (permissive and suitable for both open-source and commercial use)

Versions:

  1. v0.11.21 (VJMaster 1.0)

Used in ST_AudioProcessing module to make sense of the audio buffer.

Noise Shader

Noise Shader Library for Unity provides gradient noise functions written in HLSL. Most of these functions are ported from the webgl-noise library.

VJ Master has imported only one noise algorithm from the bundle.

https://github.com/keijiro/NoiseShader/tree/master

This software is available under the following licenses:

  1. MIT (free for commercial and non-commercial use)
  2. The original noise library is also valid under the MIT licence.

Versions:

  1. 2011-10-11 (VJMaster 1.0)

Used in ST_AudioVisualisation module to add noise to the compute shader example.

Frequently Asked Questions

How do I create a dependency on VJ Master?

Depending on your needs, add the below modules to your PublicDependencyModuleNames or PrivateDependencyModuleNames in your module’s Build.cs.

  • “ST_AudioProcessing”
  • “ST_AudioControl”
  • “ST_AudioVisualisation”

If you depend on ST_AudioProcessing then you must also include Kiss. You can do this by adding the following code to your module’s Build.cs. P.S. Take a look at ST_AudioVisualisation.Build.cs to see how it is done.

if (Target.Platform == UnrealTargetPlatform.Win64 || Target.Platform == UnrealTargetPlatform.Mac || Target.Platform == UnrealTargetPlatform.Linux)
{
   AddEngineThirdPartyPrivateStaticDependencies(Target, "Kiss_FFT");
}
else
{
   PublicDefinitions.Add("WITH_KISSFFT=0");
}

Why are the material expressions (or audio buffer) not working as expected?

The most likely scenario is that you have not defined all relevant I/Os in Project Settings / ST Audio Processing.

Material Expressions, the compute shader and the buffer version of the Niagara example rely on creating an index buffer at the beginning of the session.

Add all the I/Os that you intend to use to the PossibleAudioIOs multi-selector. You will be prompted to restart the editor. Do so.

This ensures that each I/O has the correct index. I am not a fan of the implementation here and I will upgrade it in the future.

Why are the examples not working for me?

The likely scenario is that you have either not imported the default project settings or overridden them with your own.

Return to Step 1: Getting Started With VJ Master to see how to reimport the default settings.

Can I use the included audio files in my project?

Yes.

All audio examples found in Plugin Root / ExampleAudio directory are Creative Commons 0 and taken from https://freesound.org/. This means that the sounds uploaded are free to use, even for commercial purposes, without any restriction.