Scripting Audacity with JavaScript
22 February 2025

It's always great to get back to the programmer's favourite hobby: spending more time automating something than it would take to just do the thing manually. I've been looking into audio recently, doing deep dives on the open source tools for removing noise from audio, such as RNNoise: a machine learning model developed at Mozilla that strips everything but voice from a recording. When I looked at some of the open source packages which use RNNoise, I couldn't find one that was easy to use from the command line, but I did find a way to run RNNoise through an Audacity plug-in. I started using it, and the changes it could make were pretty awesome for something open source.
Before I go on, I should also mention that this is going to be a mainly Windows post. I haven't tested the scripts on Mac or Linux, but if you are ready and willing to add config for Linux or Mac, hit me up on Bluesky!
Unfortunately Audacity doesn't have a command line interface. However, it does provide a way to interact with it using named pipes. Named pipes can be an insecure way to communicate between processes, so Audacity warns you not to use this on a web server.
Before you're able to use Audacity's scripting interface you'll have to enable the mod-script-pipe module. To do this, click into Edit, then Preferences, then Modules, make sure mod-script-pipe is enabled, and finally restart Audacity. Audacity provides two different pipes: one for sending commands to Audacity and another for receiving responses after commands have run. When the scripting module is enabled, both pipes are opened as soon as Audacity starts and are available to write to and read from. Windows named pipes are also quite frustrating in that after one is closed, you're reliant on the server to restart the pipe, which Audacity doesn't seem to do reliably.
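On Windows, the two pipes show up at fixed paths. Here are the constants the rest of the code in this post uses for them, plus a timeout for how long we're willing to wait for a response (the 30 second value is just my guess at something sensible):

const commandPipePath = '\\\\.\\pipe\\ToSrvPipe';    // write commands here
const responsePipePath = '\\\\.\\pipe\\FromSrvPipe'; // read responses here
const responsePipeTimeOut = 30_000;                  // give up waiting after 30 seconds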
This makes it slightly annoying to create a nice function that runs a command and then waits for the response to come back. However, here's some code that will let you do that. With it, we can send a command asking Audacity to list its commands, and Audacity will return all of them:
// Function to send commands to Audacity.
// (Uses the commandPipePath, responsePipePath and responsePipeTimeOut constants from above.)
import * as fs from 'fs';

async function sendCommandToAudacity(command) {
  return new Promise((resolve, reject) => {
    // Open the command pipe for writing
    fs.open(commandPipePath, 'w', (err, commandFd) => {
      if (err) {
        return reject(`Error opening command pipe: ${err}`);
      }
      // Write the command to the command pipe
      fs.write(commandFd, command, (err) => {
        if (err) {
          fs.close(commandFd, () => { }); // Close the file descriptor on error
          return reject(`Error writing to command pipe: ${err}`);
        }
        // Close the command pipe after writing
        fs.close(commandFd, async (err) => {
          if (err) {
            return reject(`Error closing command pipe: ${err}`);
          }
          const response = await readFromResponsePipe();
          console.log(response);
          resolve(response);
        });
      });
    });
  });
}

function readFromResponsePipe() {
  return new Promise((resolve, reject) => {
    // Open the response pipe for reading
    let responseString = '';
    fs.open(responsePipePath, 'r', (err, responseFd) => {
      if (err) {
        return reject(`Error opening response pipe: ${err}`);
      }
      // Keep reading from the response pipe until we get data or time out
      let currentResponseTime = 0;
      let lastTime = performance.now();
      function readMoreRecursive() {
        if (responseString.trim() !== '' || currentResponseTime >= responsePipeTimeOut) {
          // Once we have a response (or have timed out), close the pipe and resolve:
          fs.close(responseFd, (err) => {
            if (err) {
              return reject(`Error closing response pipe: ${err}`);
            }
            // Resolve with the response data
            resolve(responseString);
          });
          return;
        }
        const buffer = Buffer.alloc(4096);
        fs.read(responseFd, buffer, 0, buffer.length, null, (err, bytesRead) => {
          if (err) {
            fs.close(responseFd, () => { });
            return reject(`Error reading from response pipe: ${err}`);
          }
          responseString = buffer.toString('utf8', 0, bytesRead);
          // Track how long we've been waiting before trying the next read
          currentResponseTime = currentResponseTime + (performance.now() - lastTime);
          lastTime = performance.now();
          readMoreRecursive();
        });
      }
      readMoreRecursive();
    });
  });
}
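To use it, you await the function with a command string. For example, this should ask Audacity to list every command it knows about (GetInfo is the scripting command I'd reach for here, but the exact syntax is easiest to confirm with the macro trick described below):

// Ask Audacity to describe all of its scripting commands
await sendCommandToAudacity('GetInfo: Type=Commands');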
Great. How do we find more commands? To get a list of commands and the syntax for sending them, you can look at Audacity's macros. In Audacity, click “Tools”, then “Macro Manager”. From there, create a new macro with “New”, then hit “Insert” to add commands.
From here you'll have a list of the different commands Audacity exposes. Many of them are options from Audacity's menus, along with a few other commands. After you've selected a few commands, you can save the macro and then export it. It gets exported as a text file, which shows you the format you can use to send commands through the scripting pipe.
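To give a rough idea, an exported macro is just a list of lines like these (I've hand-written this example from the commands used later in this post, so treat it as a sketch rather than a real export):

TruncateSilence:Action="Truncate Detected Silence" Minimum="1.0" Threshold="-40" Truncate="0.25"
LoudnessNormalization:LUFSLevel="-20" NormalizeTo="0"

Each line is the command name, a colon, and the parameters: exactly the string you send down the command pipe.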
Macros are another way to automate editing audio files with Audacity. Without the pipe module you have to be inside the Audacity program to use them, but that could make sense for your workflow: there are commands which work for generic use cases, e.g. noise reduction and truncating silence, but there will usually be custom edits you'll have to make to specific parts of the file, so it can make sense to stay in the app.
Looking back at writing the JS scripts: I found that if you leave Audacity open and rerun the commands, you'll get errors because either Audacity or Node.js might not properly close the pipes (this also could be a skill issue on my part). Because of this, I wrote the script so that it opens an audio file, applies the commands, then exits Audacity.
I've wrapped the script up into a class you can import and use to start up Audacity here - you can also find it on GitHub.
// AudacityConnector.mjs
import { spawn, exec } from 'child_process';
import * as fs from 'fs';

// Paths to the named pipes
const commandPipePath = '\\\\.\\pipe\\ToSrvPipe';
const responsePipePath = '\\\\.\\pipe\\FromSrvPipe';

const openAudacityTimeOut = 10_000;  // how long to wait for Audacity to appear in the task list
const audacityStartUpTime = 5_000;   // extra time to let Audacity finish starting up
const leaveOpenAfterCommands = false;

let child;

const defaultOptions = {
  audacityLocation: 'C:\\Program Files\\Audacity\\Audacity.exe',
  commandTimeOut: 30_000
};

export class AudacityConnector {
  constructor(options) {
    this.options = { ...defaultOptions, ...options };
  }

  openAudacity() {
    return new Promise((resolve, reject) => {
      child = spawn(this.options.audacityLocation, [], {
        detached: false,
        stdio: ['ignore', 'ignore', 'ignore']
      });
      let currentResponseTime = 0;
      let lastTime = performance.now();
      let isFound = false;
      // Poll the Windows task list until audacity.exe shows up (or we time out)
      function pollForAudacity() {
        setTimeout(() => {
          if (isFound) {
            return;
          }
          if (currentResponseTime > openAudacityTimeOut) {
            console.log('Audacity did not open');
            return reject('Audacity did not open');
          }
          exec('tasklist', (err, stdout, stderr) => {
            if (err) {
              return reject(`Error executing tasklist: ${err}`);
            }
            if (stderr) {
              return reject(`Error: ${stderr}`);
            }
            // Check if the output contains "audacity.exe"
            if (stdout.toLowerCase().includes('audacity.exe')) {
              isFound = true;
              // Give Audacity a few more seconds to open its pipes before resolving
              setTimeout(() => {
                resolve();
              }, audacityStartUpTime);
              return;
            }
            currentResponseTime = currentResponseTime + (performance.now() - lastTime);
            lastTime = performance.now();
            pollForAudacity(); // not found yet, keep polling
          });
        }, 500);
      }
      pollForAudacity();
    });
  }

  closeAudacity() {
    child.kill();
  }

  sendCommandToAudacity(command) {
    return new Promise((resolve, reject) => {
      // Open the command pipe for writing
      fs.open(commandPipePath, 'w', (err, commandFd) => {
        if (err) {
          return reject(`Error opening command pipe: ${err}`);
        }
        // Write the command to the command pipe
        fs.write(commandFd, command, (err) => {
          if (err) {
            fs.close(commandFd, () => { }); // Close the file descriptor on error
            return reject(`Error writing to command pipe: ${err}`);
          }
          // Close the command pipe after writing
          fs.close(commandFd, async (err) => {
            if (err) {
              return reject(`Error closing command pipe: ${err}`);
            }
            const response = await this.readFromResponsePipe();
            console.log(response);
            resolve(response);
          });
        });
      });
    });
  }

  readFromResponsePipe() {
    return new Promise((resolve, reject) => {
      // Open the response pipe for reading
      let responseString = '';
      fs.open(responsePipePath, 'r', (err, responseFd) => {
        if (err) {
          return reject(`Error opening response pipe: ${err}`);
        }
        // Give up if Audacity hasn't responded within commandTimeOut
        const timeout = setTimeout(() => {
          fs.close(responseFd, () => { });
          resolve();
        }, this.options.commandTimeOut);
        function readMoreRecursive() {
          if (responseString.trim() !== '') {
            // Once we have a non-empty response, close the pipe and resolve:
            clearTimeout(timeout);
            fs.close(responseFd, (err) => {
              if (err) {
                return reject(`Error closing response pipe: ${err}`);
              }
              // Resolve with the response data
              resolve(responseString);
            });
            return;
          }
          const buffer = Buffer.alloc(4096);
          fs.read(responseFd, buffer, 0, buffer.length, null, (err, bytesRead) => {
            if (err) {
              clearTimeout(timeout);
              fs.close(responseFd, () => { });
              return reject(`Error reading from response pipe: ${err}`);
            }
            responseString = buffer.toString('utf8', 0, bytesRead);
            readMoreRecursive();
          });
        }
        readMoreRecursive();
      });
    });
  }
}
Then you can consume the AudacityConnector in your code like this (this script takes the input and output file names as command line arguments):
// script.js
import { AudacityConnector } from './AudacityConnector.mjs';

const connector = new AudacityConnector();
await connector.openAudacity();
await connector.sendCommandToAudacity(`Import2: FileName=${import.meta.dirname}\\${process.argv[2]}`)
await connector.sendCommandToAudacity("Select: Track=0")
await connector.sendCommandToAudacity("TruncateSilence: Action=\"Truncate Detected Silence\" Compress=\"50\" Independent=\"0\" Minimum=\"1.0\" Threshold=\"-40\" Truncate=\"0.25\"")
await connector.sendCommandToAudacity("ClickRemoval:Threshold=\"200\" Width=\"20\"")
await connector.sendCommandToAudacity("LoudnessNormalization:DualMono=\"1\" LUFSLevel=\"-20\" NormalizeTo=\"0\" RMSLevel=\"-20\" StereoIndependent=\"0\"")
await connector.sendCommandToAudacity(`Export2:Filename="${import.meta.dirname}\\${process.argv[3]}" NumChannels="1"`)
connector.closeAudacity();
Then run it with:
node ./script.js myaudiofile.wav out.wav
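One gotcha: script.js uses top-level await and import.meta, which only work when Node treats the file as an ES module. If your project isn't set up that way already, a minimal package.json next to the script sorts it out (renaming the file to script.mjs works too):

{
  "type": "module"
}

Also note that import.meta.dirname needs a fairly recent version of Node (it arrived in 20.11).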
If you want to try the RNNoise filter, you can use the werman plugin for Audacity. The results you can get are pretty amazing: you can make an iPhone recording sound like it came from a professional studio. To use it with the above script:
await connector.sendCommandToAudacity("RnnoiseSuppressionForVoice: Use_Preset=\"Default\"")
Finally, here are some other open source command line tools you can check out if you want other ways to process your audio: