The Musi-co methodology,

an AI tool for adventurous human composers and metacomposers

The "Handmade" Engine

Beyond Stochastic Parrots. Built by composers and AI specialists, united for the new creators.

How it works

Adaptive

Musi-co’s engines can be guided by, and react in real time to, an unlimited range of input strategies.

Automatic & unlimited

Musi-co is fully algorithmically generative, and can create endless streams of music.

Rights free

The music that Musi-co writes is copyright-free and cannot be traced back to existing works.

HIGHLIGHTS

Patented Cross-Ontology Mapping

Standard AI guesses. Musi-co knows.

Our patented technology bridges the gap between different data domains, linking visual narratives (video/gaming) to musical theory (rhythm, harmony, texture).

This Media2Music pipeline ensures that the music evolves with your dramaturgy, not just behind it.

Big archives can be turned into short media feeds for social media without any additional IP burden.

Integrations

Musi-co in your technology

The MIDI Advantage

We don’t just spit out flat audio files. We generate high-fidelity MIDI data.

Total Control: Swap instruments, tweak velocities, and rearrange notes.

Integration: Drops directly into your DAW of choice.

Resolution: Capture the nuance of human performance that “black-box” audio models miss.
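Because the output is MIDI rather than rendered audio, edits like these are plain data transformations. A minimal sketch follows; the note representation and helper names are illustrative, not Musi-co's actual API:

```python
# A minimal sketch of MIDI-as-data editing. The note representation and
# helper names below are illustrative, not Musi-co's actual API.

def swap_instrument(notes, new_program):
    """Reassign every note to a new General MIDI program (instrument)."""
    return [{**n, "program": new_program} for n in notes]

def scale_velocities(notes, factor):
    """Scale performance dynamics; MIDI velocities stay clamped to 0-127."""
    return [{**n, "velocity": max(0, min(127, round(n["velocity"] * factor)))}
            for n in notes]

# A generated two-note phrase on piano (General MIDI program 0).
phrase = [
    {"pitch": 60, "velocity": 80, "start_beat": 0.0, "program": 0},
    {"pitch": 64, "velocity": 90, "start_beat": 1.0, "program": 0},
]

# Re-voice the phrase to strings (program 48) and soften the dynamics by 25%.
edited = scale_velocities(swap_instrument(phrase, 48), 0.75)
```

Swapping the program number re-voices the phrase without touching the notes themselves, and velocity scaling adjusts the performance dynamics, which is exactly what a flat audio file cannot offer.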
 
 

Artisanal Data

We believe in Small Data, Deep Intelligence.

By using curated datasets from master composers, our AI learns the intent behind the notes, resulting in music that feels intentional, not incidental.



A brief history of Musi-co's AI adventure


The Musi-co / Alessandro Tibo / Paolo Frasconi research arc: from early rhythm generation to CONLON to diffusion MIDI

The work began in 2017 with early explorations of generative models for EDM drum patterns, establishing the core idea of learning structured latent spaces from which novel musical material could be sampled and interpolated.

The second paper formalised this into a VAE/GAN-based system that generates drum patterns and enables smooth interpolations between genres, embedding a DJ-like transitional intelligence into a generative model that was evaluated by real EDM practitioners.
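The interpolation step can be sketched independently of any particular model. The toy example below assumes only that an encoder maps each genre's pattern to a latent vector; the 4-d vectors and the `lerp` helper are illustrative placeholders, not the paper's architecture:

```python
# Toy illustration of latent-space interpolation between genres. The 4-d
# vectors stand in for real encoder output; a decoder (not shown) would
# turn each interpolated code back into a drum pattern.

def lerp(z_a, z_b, t):
    """Linear interpolation between two latent vectors, t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(z_a, z_b)]

z_house = [0.9, -0.2, 0.4, 0.0]   # latent code of a house pattern (toy)
z_dnb = [-0.5, 0.8, 0.1, 1.0]     # latent code of a drum & bass pattern (toy)

# Five evenly spaced codes from house to drum & bass; decoding the middle
# ones yields the DJ-like transitional patterns described above.
transition = [lerp(z_house, z_dnb, i / 4) for i in range(5)]
```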

The CONLON paper then substantially advanced the framework on three fronts simultaneously: a new lossless piano-roll-like data representation that stores velocities and durations in separate channels; Wasserstein autoencoders as the generative backbone (less prone to blurriness than VAEs); and a generation strategy that computes optimal trajectories in latent space as a widest-path problem, preventing abrupt transitions between consecutive patterns.
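The widest-path idea can be illustrated on a toy graph: treat candidate patterns as nodes, pairwise similarities as edge weights, and pick the route that maximizes the smallest similarity along the way, so that no two consecutive patterns clash. The sketch below is a textbook bottleneck variant of Dijkstra's algorithm, not CONLON's actual implementation:

```python
import heapq

# Widest-path toy: nodes are candidate patterns, edge weights are pairwise
# similarities, and the best route maximizes the smallest similarity along
# the way, so no two consecutive patterns differ abruptly.

def widest_path(graph, start, goal):
    """Return (bottleneck, path) maximizing the minimum edge weight."""
    best = {start: float("inf")}
    heap = [(-float("inf"), start, [start])]   # max-heap via negated widths
    while heap:
        neg_width, node, path = heapq.heappop(heap)
        width = -neg_width
        if node == goal:
            return width, path
        for nbr, sim in graph[node].items():
            cand = min(width, sim)
            if cand > best.get(nbr, 0.0):
                best[nbr] = cand
                heapq.heappush(heap, (-cand, nbr, path + [nbr]))
    return 0.0, []

# Four patterns: the direct A -> D hop is abrupt (similarity 0.2), so the
# widest path detours through smoother neighbours.
g = {
    "A": {"B": 0.9, "D": 0.2},
    "B": {"A": 0.9, "C": 0.8},
    "C": {"B": 0.8, "D": 0.7},
    "D": {"A": 0.2, "C": 0.7},
}
```

Here `widest_path(g, "A", "D")` routes A, B, C, D with bottleneck 0.7 rather than taking the jarring direct hop, which is the musical intuition behind avoiding abrupt pattern transitions.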

Crucially, CONLON also introduced small, musician-curated training datasets, ASF-4 (Acid Jazz/Soul/Funk) and HP-10 (High Pop), opening the door to personalized generators that individual musicians could compose or curate themselves.

The new paper (April 2026), about to be published with Frasconi and Tibo as lead scientists, replaces the autoencoder latent-space interpolation paradigm with diffusion models: a shift from trajectories in latent space to iterative denoising in the data space, which offers richer modelling of the full MIDI distribution, better handling of polyphonic multi-instrument structure, and more controllable conditioning (style, genre, harmonic context). This is the natural next step: CONLON solved the data representation and latent traversal problems; diffusion improves generative fidelity, raising the quality and diversity of the generated music.
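At its core, the diffusion paradigm replaces a single decode with an iterative loop that starts from noise and repeatedly refines it. The sketch below shows only that generic loop shape; the closed-form "denoiser" that nudges a toy pianoroll vector toward a fixed target is a stand-in so the loop runs without a trained network, whereas the forthcoming paper's model would predict and remove noise at each step:

```python
import random

# The generic shape of diffusion sampling: start from noise, then
# iteratively denoise in data space. TARGET and the closed-form "denoiser"
# below are stand-ins, not the forthcoming paper's model.

TARGET = [0.0, 1.0, 0.0, 1.0]   # toy 4-step pattern the stand-in knows

def sample(steps=50, seed=0):
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in TARGET]        # pure noise at t = steps
    for t in range(steps, 0, -1):
        # One reverse step: move 1/t of the way toward the clean estimate.
        # (A real sampler would also re-inject scheduled noise while t > 1.)
        x = [xi + (ti - xi) / t for xi, ti in zip(x, TARGET)]
    return x
```

With a learned denoiser in place of the fixed target, different noise seeds converge to different plausible pieces, which is where the added diversity of the diffusion approach comes from.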

Technology
Explore
Learn more
impro_beat

Impro BEAT

Preliminary release • Free to use

VST Plugin • Mac

Bare-bones UI version!

No implied license: upon downloading Impro BEAT, Musico grants, and you receive, no license under any Musico Intellectual Property.

impro_vst

Impro

Preliminary release • Free to use

VST Plugin • Mac

Bare-bones UI version!

No implied license: upon downloading Impro, Musico grants, and you receive, no license under any Musico Intellectual Property.