About

Research-driven AI, commercially applied.

Strangeloop was founded in 2009 as a vehicle for AI research, consulting, and software development. We sit at the intersection of deep technical research and creative industry practice — building the systems ourselves, not just advising on them. Today that means a sustained focus on local, agentic AI for product development: systems you control, integrated into real products and workflows.

Selected research & work

2016 — present

MAGNet

A neural audio synthesis system designed and built from first principles. Real-time, pure C runtime with no ML framework dependencies. Deployed as an interactive installation with Massive Attack at the Barbican (2019), and the basis for a forthcoming commercial plugin suite for musicians and producers.

Read coverage →

2009 — present

Maximilian

A C++ audio synthesis and DSP library built as the real-time audio runtime underlying MAGNet and all subsequent neural synthesis research. It renders ML model outputs as audio without framework overhead, purpose-built so that neural audio systems can operate in live and embedded contexts. Widely adopted across the music technology industry and used in academic AI research programmes internationally.

View on GitHub →

AHRC — PI — £1M FEC

MIMIC

The first research-council-funded Creative AI project in the UK. Led as Principal Investigator with Google, Durham, and Sussex. Produced open-source tools for machine learning in music that are now standard in the field, including the RAPID-MIX library and the MIMIC web platform.

Visit platform →

c. 2005 — 2010

Christian Marclay — The Clock & Mabuse

Designed Mabuse — a generative, rule-based audiovisual composition system for live performance — used by Christian Marclay, Vernon Reid, and others across major international productions. Also technical consultant and software designer for The Clock (Golden Lion, Venice Biennale 2011). Pre-neural-network generative computing applied at the highest level of contemporary art practice.

About The Clock →

AHRC — PI — £308K FEC

Sound, Image and Brain

Principal Investigator on an AHRC-funded programme investigating computational models of audiovisual perception, including brain-computer interfaces, ML approaches to audio-visual synchronisation, and signal processing for accessibility. The research produced early BCI systems, sound visualisation tools for deaf people, and generative audio-visual engines, all directly foundational to Strangeloop's neural synthesis work.

View on UKRI →

Wellcome Collection — Heart n Soul

Heart n Soul at The Hub

Lead academic on the Creative Computing Institute partnership, exploring how data could be collected and analysed using AI in collaboration with learning-disabled and autistic co-researchers. The work formed part of Heart n Soul's third residency at Wellcome Collection (2018–21), with co-research led jointly by learning-disabled people, autistic people, and people without learning disabilities, and continued remotely as the programme moved online.

View on Wellcome Collection →

Coursera / FutureLearn — 320,000+ enrolments

Creative AI Courses

Lead presenter on the first creative coding MOOC in the world — and the first MOOC by an English university — which enrolled over 250,000 learners globally (Goldsmiths / Coursera, 2013). That programme directly seeded a suite of Creative AI courses at UAL: Creative AI: Images and Media, Creative AI: Sound, Music and Interaction, and Creative AI: Text and Transformations, now live on Coursera, built on FutureLearn programmes that reached a further 70,000 learners.

View on Coursera →

Selected consulting engagements

Music & AI — XL Recordings

Speech Synthesis for Live Performance

Designed and built a bespoke AI speech synthesiser for XL Recordings, generating live vocal tracks for headline artist Koreless. The system was developed for use in live performance contexts where conventional recording was not viable.

Koreless →

Advisory — Mindset Music

Music Tech VC Fund

Official advisor to Mindset Music, the music technology investment fund from Mindset Ventures. The fund invests in early-stage startups across AI, sync, production, and music marketing, with a focus on tools for creators, rights holders, and the broader music industry.

Visit Mindset Music →

Biotech & ML — DeskGen

Genome Search System

Developed a prototype DNA search system that substantially increased search speed across large genomic datasets, including the human genome. Delivered in collaboration with ex-Sony engineers, transferring engineering techniques from the games industry into professional gene-editing software.

About DeskGen →

Health Tech — UCL / NHS

Stroke Rehabilitation — Adaptive Sensor Systems

Designed sensor-driven hardware and software enabling stroke survivors to manage self-rehabilitation at home, using real-time signal processing and adaptive feedback to guide movement. Built in partnership with the Centre for Neurorehabilitation at UCL. The work was published in an award-winning paper at ACM CHI and directly connects to the broader BCI and computational perception research programme.

Read paper →

Health & social care — Heart n Soul / Health Foundation

You & Me

Led the research and technical programme for Heart n Soul's You & Me, funded by the Health Foundation in partnership with the Royal Borough of Greenwich and continuing the work begun in Believe In Us. Co-designed with learning-disabled and autistic people and social care staff; built and tested a bespoke LLaVA-based app with participant-driven fine-tuning, so that AI could support communication, understanding, and independence in care settings.

View project →

Creative Tech — Bronze Format

Generative Music Format

Created a cross-platform generative music format allowing artists to release music that plays back uniquely on every listen while preserving the essential character of the original work. Later adopted by artists including Sigur Rós, Jai Paul, and Arca. A direct commercial application of generative audio research to the music distribution problem.

Read more →

Sensor ML — Audiowings

Gesture Recognition for Wearable Audio

Designed gesture recognition and sensor-fusion systems for a wireless headphone device, enabling hands-free playlist navigation and web interaction through body movement. An applied ML project transferring embodied interaction and motion-classification research from academic settings into a consumer wearable product.

About Audiowings →

Mick Grierson

Founder & Director

Mick Grierson is a Professor and Research Leader at the University of the Arts London, where he co-founded the Creative Computing Institute. His research spans Creative AI, neural audio synthesis, and generative video, producing works recognised as among the earliest in the field worldwide. For more than a decade he has worked with people with disabilities on AI and machine learning, in research, education, and co-designed projects spanning creative practice and health and social care.

He has led major industry partnerships with IRCAM, Google Magenta, Massive Attack, Aardman, and Ableton, and has contributed to national AI policy through AHRC, DSIT, and the Tony Blair Institute. He co-founded and chairs the Daphne Oram Trust and holds a PhD in cognitive film alongside a background as a practising musician.

His research has produced some of the first Creative AI artworks, the first CD-quality neural audio synthesis system, and the technique of network bending for creative manipulation of neural networks.

Strangeloop works with a network of specialist researchers, engineers, and domain experts. We assemble the right team for each engagement — drawing on deep relationships across academia, industry, and the arts.

Work with us

Whether you need advisory support, education, or a technical partner.

Get in touch →