Mechanical Midi Music. Next Gen Framework #143
Replies: 18 comments 2 replies
-
Overall it sounds great! I'll have to do a little more reading on MIDI 2.0, but it sounds like it might have enough options for arbitrary information that it should work fine to replace Moppy messages. First, at the risk of sounding a little self-promoting: have you considered just adding MIDI 2.0 support to the existing Moppy project in a fork? It sounds like a fair bit of the device code would need to be changed to decode MIDI 2 instead of Moppy messages, but the current Controller and instrument code already does most of what you're looking to do, so why reinvent the wheel? There are a couple of bits that would make forking more work (mostly C++ instead of Java, and moving the distribution to the devices), but I have some thoughts on those below. Obviously, there's nothing quite like building from scratch, so have at it if that's what you want to go for. I just wanted to make sure you'd at least considered reusing the code that already exists. Some additional thoughts:
Just my initial thoughts, take them or leave them :)
-
I guess I was mostly thinking about making sure there's room for non-standard features like, "turn red lights on for drive 2," or, "set drive 8 to stationary mode." But then I just remembered that actually I ended up using System Exclusive MIDI messages for those anyway so I could sync them with the music. So even MIDI 1 is fine.
Okay, I think I understand now more what you're going for; though it's not that different from existing Moppy. In the current design, the Controller just reads MIDI files and maps/distributes events to be sent out to different devices. The work of converting the events into music and dealing with instrument specifics is the job of the devices. If I'm understanding correctly, the biggest difference is that you're planning on sending all the MIDI events to all the devices and letting the devices sort out which events they want to handle (rather than the controller doing the distribution work). As long as the Controller can still easily configure all the devices (so you don't have to touch a bunch of different separate configurations to make changes) I don't see a big downside. Feels like there may be a greater chance of configuration mistakes where multiple devices accidentally grab the same events, but that's a bit subjective.
Yeah, for UDP it's less a bandwidth consideration than it is not having to deal with managing connections or resending dropped packets (though that happens fast enough to maybe not matter). If the host dies and restarts, the clients don't really need to reconnect with UDP (at least with multicast) since they won't know anything happened. Moppy doesn't constantly send a stream of notes, so there is always a risk that a Note Off event would get lost and a note would stick on. The failsafe you mentioned is a really good idea (I should probably add that to Moppy), but in practice it a) doesn't really happen often enough to be an issue unless I kill something while it's running, and b) is usually fixed by the next incoming note.
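The failsafe mentioned above could be sketched as a simple watchdog: remember when each note started, and silence anything that has been sounding implausibly long because its Note Off was lost. This is a minimal sketch with invented names (`activeNotes`, `watchdogTick`) and an assumed 10-second cutoff; it isn't code from either project.

```javascript
// Hypothetical stuck-note failsafe: if a UDP Note Off packet is lost, any
// note sounding longer than MAX_NOTE_MS gets silenced by the watchdog.
const MAX_NOTE_MS = 10000; // assumed cutoff; would be tuned per instrument

const activeNotes = new Map(); // key: "channel:note" -> start timestamp (ms)

function noteOn(channel, note, now) {
  activeNotes.set(`${channel}:${note}`, now);
}

function noteOff(channel, note) {
  activeNotes.delete(`${channel}:${note}`);
}

// Called periodically (e.g. from the device's main loop or a timer).
// Returns the notes it silenced so the caller can stop the hardware.
function watchdogTick(now) {
  const stuck = [];
  for (const [key, startedAt] of activeNotes) {
    if (now - startedAt > MAX_NOTE_MS) {
      stuck.push(key);
      activeNotes.delete(key);
    }
  }
  return stuck;
}
```

Since, as noted, the next incoming note usually fixes a stuck drive anyway, the timeout only needs to be long enough not to cut off legitimately held notes.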
There's a lot of pretty good cross-platform stuff out there, so like I said I don't think Java is a must as long as there's some good compatibility.
Unless I'm missing something, the user only has access to these features without the core controller if they're willing to... manually craft and send out packets, though, right? Practically speaking, everyone's still going to use the controller to do all the configuring. Which is fine, because I think that will be easier, but it means it may not be worth spending too much effort on making the devices work without a controller if that's not something users want. (Or maybe I've misunderstood something.)
This is definitely a good way to do things, and is more or less how Moppy is currently set up. Pushing instrument/hardware specifics behind a nice interface makes the controller and messaging protocol nice and instrument-agnostic. Overall I think it seems like a sound design (if pretty similar to the current Moppy). There's certainly a lot to be gained on the C++ side of things since it's far from my primary language, and there are some features you're talking about that would be really welcome (especially having more than 16 MIDI channels) that would be hard to integrate into the existing code. Also, moving everything to MIDI instead of a custom protocol is a better solution for ESP devices, which, as you said, are really cheap and available these days. Arduinos might still struggle with noisy MIDI files, but that's probably not a huge loss. As I said, I'm excited to see what you end up with, and I'm always happy to answer any questions I can to help. If I had the time, I'd love to do some testing or contribute code too, but the drives are packed away somewhere and I still haven't set up my workbench after moving back in January :P
-
We are also wrapping all the MIDI messages in UDP, so instead of multicast we can send just the relevant MIDI channels to each device individually, or multicast the MIDI music messages but send configuration settings to the corresponding devices directly. This can quickly be changed later with MIDI 2.0's expanded channels so that the same channel is never shared between devices, while a device can still take up multiple channels (just a cleaner solution, more true to the MIDI protocol).
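The device-side filtering this implies boils down to reading the channel out of each status byte: a MIDI 1.0 channel-voice status byte encodes the message type in its high nibble and the channel (0-15) in its low nibble. A small sketch of that idea (function names are illustrative, not from the actual project):

```javascript
// Sketch of device-side channel filtering for multicast MIDI-over-UDP.
function messageChannel(statusByte) {
  return statusByte & 0x0f; // 0-15, usually displayed as channels 1-16
}

function isChannelVoice(statusByte) {
  return statusByte >= 0x80 && statusByte <= 0xef; // excludes SysEx/realtime
}

// acceptedChannels: a Set of channel numbers this device is configured for.
// A device can claim several channels, matching the "a device can still take
// up multiple channels" idea above.
function deviceAccepts(message, acceptedChannels) {
  const status = message[0];
  if (!isChannelVoice(status)) return true; // pass system messages through
  return acceptedChannels.has(messageChannel(status));
}
```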
You are correct, but we were also thinking of adding USB or DIN MIDI support. This allows instruments to be played without a computer by simply connecting the devices directly to a keyboard or synth pad. You can also use the device's default settings, set up the device with a computer beforehand, or control it directly from musical editing software such as FL Studio or, in my case, Mixcraft. You can also add MIDI mixers inline between the controller and device to add pitch bend or volume changes for performance, which is why adhering to the MIDI protocol is so important. (UDP transport is not specified in MIDI 1.0, only DIN/USB, but it should all be good in MIDI 2.0.)
-
If you're planning to support both direct messages and multicast, the devices will need to have the ability to filter out messages anyway. But supporting both also opens the door to misconfigurations where nothing happens (e.g. the device is configured to be "device 3" but the controller is configured to send notes for "device 2" to that address so the device just filters everything out). Support for multiple strategies and flexibility is good, but just make sure you're keeping an eye out for potential headaches it might introduce. To be clear: I'm not saying what you've proposed is bad or broken or anything, just want to make sure you've noticed the potential issues and decided it's still worth it 🙂 .
This is really cool, and definitely an advantage of using MIDI instead of a custom protocol. It still sounds like you'll need to have some kind of controller for configuration, but being able to use the defaults means you don't NEED a controller in all cases.
-
Just a quick update. I have been busy with college, my Tesla coil, and Pneumatic Solenoids #153, so not much work has been done on the new framework. I am switching to the new release of .NET 6 C#. .NET 6 allows cross-platform development across Windows, Linux, and Mac, as well as allowing applications to run on the web. This will give a great backbone and allow for an extremely flexible GUI. I'm still learning, but I hope to get a good deal of work done over the upcoming winter break. .NET 6 also allows you to run applications and change functions on the fly without having to recompile, which is kind of sick.
-
Funny you ask for an update, because I was going to post one in the next few days. I haven't made as much progress as I would like with college and my other instrument projects. Over winter break I started by testing Microsoft's .NET 6 MAUI framework, which is cross-compatible across Windows, Mac, Linux, iOS, and Android. I also used Blazor, which would allow the application to run in any web browser. I designed the whole front end and some basic logic, but as MAUI is still in preview and so new, I struggled to figure out some of the basic features, and there is minimal documentation and example code. Eventually I will try to go back to MAUI, as the idea of a fully cross-platform controller that works in the web browser is very appealing, even if it might be a limited version with only playlist control and message routing. (See implementation below.) After trying MAUI I switched to WPF, which is Windows-oriented but will at least allow me to design the controller in a well-supported environment.

In the meantime, I have also been working on various other instruments and projects for my personal orchestra. I finished both my large Tesla coil (which can reach 4ft arcs) and my air compressor instrument, though I still plan on remaking both out of smoked acrylic and lighting them with addressable LEDs synced to the music they are playing. I just finished adding 300 of these addressable LEDs to my 5ft-wingspan cargo plane and plan on displaying and recording the show for the 4th of July. The current instrument project I'm working on is converting a hammered dulcimer to be self-playing. I was able to get a little funding for the project from my college under a student creative idea fund, but it still has a bit of work to be done before I'm ready to show anything. I will update my completed instruments in the instrument discussion thread and add more details as they appear. As for WPF, I started working on the application a week and a half ago.
I'm making slow progress, as I'm entirely self-taught and learning a new framework takes time. At this moment I have the UDP server written and have successfully sent Ping and Pong packets, as well as other data, between the server and an ESP32. I have been testing different MIDI libraries and so far have been able to rip and stream the MIDI packet commands from a MIDI file; I just need to hook them to my UDP server. I still have to reconstruct the entire packet handling on the ESP32 client side and design the entire GUI and media controller server side. Basically everything... I've spent a good amount of time dissecting Moppy2, and some features are heavily "inspired" by it, mostly client-side but also some of the architecture server-side.

For the UDP server I am using multicast, similar to Moppy, except all the packets are raw MIDI packets sent over a network, with the exception of the Ping/Discover message. I might try to use a System Exclusive message for this purpose, but for now it's just a simple "Ping" sent as a UTF-8 encoded string for testing. When a ping command is sent to the multicast group, each device responds with a unicast "Pong" back to the IP address of the ping's source (the server). My current plan is to replace the pong with all the data relating to the current client device's state, such as name, ID, instrument type, note min/max, supported controls (pitch bend, aftertouch), and the configuration of the client's distributors. The distributors are similar to Moppy's Mappers: they route which channels are accepted by the client, how the channels are distributed (round robin, straight through), and each channel's group of notes (e.g., floppy drives 4-8). I might have a separate ping packet and device update packet, though. When the server receives a pong from a new client, the address and associated data are added to some internal database/array. This list will then be displayed on the right-hand side of the GUI as connected devices.
The user can then reconfigure the attributes and distributors of the devices in the GUI, which will send a unicast packet to the device's associated IP with the updated parameters. The ESP32 saves the config to its filesystem. This creates a multicast channel where the only data being sent is raw MIDI packets, and all configuration of devices is done unicast, directly to each client. (I have the multicast and unicast working together currently, just none of the handling logic.) Because all the multicast packets are raw MIDI, and each device handles its own distribution of notes, the communication protocol is greatly simplified. This basically separates the server into two parts, a MIDI player and a configuration tool, and each can be used without the other. For example, a Raspberry Pi could easily be used as a server without an interface or configuration tool for a school display. Each device saves its own config, so you set up the devices beforehand, and then all the Pi has to do is read a playlist of MIDI files with an existing library, get the MIDI data, and send it to a multicast group. I started with Wi-Fi comms, but I plan to add Serial and DIN as the project matures. If a user is using DIN, they can also add inline MIDI controllers for live performance without the need of a server at all. The devices act as plain MIDI instruments, with config still handled by the server application. Plus, you can mix and match, with some devices playing backing tracks while performers control other instruments. I still have a lot of work to do, but I should have a bit of free time this summer where I can hopefully start ticking things off. As I said, though, I'm learning as I go, so it's taking a bit of time along with all my other projects and schooling. Also, here is my YouTube channel: https://www.youtube.com/c/DJthefirst
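As a rough illustration of the distributor routing described above, a "round robin" strategy can be as small as a counter that walks an instrument pool (e.g., floppy drives 4-8). The class and method names here are invented for illustration, not Mechanical Midi Music's actual API:

```javascript
// Sketch of a round-robin distributor: each accepted Note On is assigned to
// the next instrument in the configured pool, wrapping back to the start.
class RoundRobinDistributor {
  constructor(firstInstrument, lastInstrument) {
    this.first = firstInstrument;
    this.last = lastInstrument;
    this.next = firstInstrument;
  }

  // Returns the instrument index that should play the next note.
  assign() {
    const target = this.next;
    this.next = target >= this.last ? this.first : target + 1;
    return target;
  }
}
```

A "straight through" strategy would instead map channel n directly to instrument n; the distributor abstraction just swaps out this one assignment step.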
-
Also, Sammy, how long did it take you to write Moppy V1, and what kind of delay did you get using UDP on the ESP32? I'm getting 1/2 to 1/10 of a second using the same libraries.
-
Uh, I guess it depends on where you want to draw the "V1" line. I hacked some Java to get a floppy drive playing over a parallel port in an evening; the first "MoppyAdvanced" commit on GitHub was like 7 months later, and there were some pretty consistent commits for another 3 months-ish before things slowed down for a while. If you're asking to compare to your project, I'd add a couple of things:
I don't think one strategy is necessarily better than the other, but don't get discouraged if you aren't seeing results right off the bat as long as you're still putting in productive work somewhere.
I have to admit I don't actually know. 😛 I built it, hooked up an ESP32, hit the "play" button, and it sounded fine, so I didn't look into it any further. How are you measuring the 100-500ms delay? That seems high for a local network ping. It's possible I actually do get that much delay but it's consistent across all the packets so the songs still sound fine; but it would have to be consistent (e.g., always 450ms). If it's fluctuating between 100ms and 500ms, some notes that should be playing at the same time will be up to 400ms apart, which would probably be noticeable.
-
Time for another project update. First, to respond to your previous comment: I could quickly tell there was around half a second of delay just from the backing track on my computer and the PWM buzzer on my ESP32. To get numerical results I used Wireshark and just sent a ping/pong and measured the delta, which is where I got the 100-500ms readings. I also tried Wireshark send-to-Serial-and-back to eliminate ESP32 sending delay, but I was getting around the same results. I suspect it is something to do with the Arduino libraries, but I'll save reworking IP libraries for another day. For now, I added Serial networking support and am using Hairless MIDI and my MIDI editor to communicate with my ESP32. I haven't continued working on the WPF server until I finish the client code. I am also taking Java this semester (though I've used Java before); I'm planning to stick with C# for the final version of the server, but it might make sense to write a quick and dirty Java implementation.

Now for the client project update. The majority of the framework has been written out, but various methods have not been written, and a majority of the features have not been tested. For example, all testing that I have done has only been using one distributor, straight through on MIDI channel 1, so other methods such as RoundRobin, RoundRobinBalance, Ascending, Descending, and Stack are written and implemented but not tested yet. Similarly, the distributors have all sorts of configurations, but I have only used one basic config, and a lot of settings like Omni mode and monophonic vs. polyphonic have not been implemented. All that being said, what I have gotten working is using an ESP32 device as a single PWM instrument (the example instrument). I only have one buzzer at the moment, but everything is written so that I could play up to 15 instruments on the ESP32 polyphonically (limited by the number of available pins).
I have been able to play some pretty fast songs in time, with pitch bending and polyphonic notes, though I have one or two small bugs to work on with polyphonic note playing. The instrument class structure supports up to 32 instruments with up to 16 notes per instrument being played simultaneously. This might not make sense for a PWM buzzer, but when using an ESP32 with two shift-register instruments like a piano or xylophone, it might make sense to have 16 simultaneous notes. In theory this allows for 512 simultaneous notes, but that depends on the instrument implementation, the number of instruments, the amount of pitch bending, etc. I haven't done any sort of load testing yet; the flexibility, like I said, is more for 32 instruments playing 1-2 notes each, or 1-2 instruments playing 16 notes each. If you want more notes you could always combine two instruments into one in the instrument class, but the most notes I've seen played at once in a standard piano tutorial is 10, so I'm not going to worry about that. The wide reconfigurability of the instrument class allows you to push your ESP32 until it crashes, letting you get full use of it before you need a new device to support more instruments. There is still a lot of optimizing I need to do for the overhead and timings, but I haven't run into problems yet. I have implemented some of the basic CCs (continuous controllers) such as All Notes Off, Reset, and Mute. I have built support for the others into the framework and added Omni mode and Mono mode, but haven't finished the methods. The MIDI editor I use, Mixcraft, sends an All Notes Off message instead of addressing the currently playing notes, which led to notes hanging when the player was stopped. This problem also exists in Moppy, I believe, though I haven't had time to look into it too much. I might make an issue or a pull request, though I have not looked too deeply into the Java component of Moppy yet.
A new feature I might look into adding to increase the number of channels is note offsets. Oftentimes MIDI tracks stay within 2-3 octaves, at most 4-5. MIDI supports 10 octaves of range, and most instruments cannot even play more than a few octaves (Moppy only supports 7). You could configure the distributors to only accept a certain range of notes, offset a MIDI track by x octaves, and then stream a MIDI channel containing 2-3 tracks. The distributors would then splice the channel back into tracks by octave and reverse the offset. I was planning on implementing note limiters per distributor anyway, as some instruments can play only certain ranges, but the octave offset would allow for even more flexibility. Everything has been rewritten from scratch except for parts of the UDP class (which I am no longer using) and the instrument timer. The instrument classes use a similar structure to Moppy for the tick (though the tick function is rewritten), the instrument timing constants, and pitch bending, though each is slightly different. Once I finish perfecting the example-class PWM instrument, I plan on porting over some of the pre-existing instruments from Moppy and adding a few new ones to the project. The project is still a work in progress and many parts are missing, but I look forward to any feedback you are willing to give. I still need to write documentation, though this thread probably serves as a good source of information for now, lol. I am planning to have some of my professors at college look through the code and roast me so I can improve upon the core structure. It's a fun challenge trying to optimize all the loops to run as fast as possible so multiple notes can play on each instrument at a time. Mechanical Midi Music: I documented the code pretty well, and everything is named straightforwardly, though there are probably a few spelling mistakes sprinkled around. 😉 (Ignore the WPF server for now.)
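The note-offset idea above amounts to a per-distributor range filter plus an octave shift (12 semitones per octave). A tiny sketch with invented names, not the project's actual distributor code:

```javascript
// Sketch of per-distributor note limiting with an octave offset: notes
// outside the accepted window are dropped; accepted notes are shifted back
// down by the offset that was applied when the track was authored.
function routeNote(note, { minNote, maxNote, octaveOffset }) {
  if (note < minNote || note > maxNote) return null; // not for this distributor
  return note - 12 * octaveOffset; // undo the authoring-time octave shift
}
```

For example, two tracks could share one channel: track A stays in its normal range, track B is authored two octaves up, and track B's distributor is configured with a high note window and `octaveOffset: 2`, so both end up playing the same written pitches on different instruments.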
-
*I fixed polyphony; I just forgot to clear a variable when I added pitch bend support. Also, is there a story behind why the description of the repository capitalizes it "Musical flOPPY controller"?
-
I am back with updates until Sammy tells me to stop... Currently I have gotten the framework working with a few different instruments: my air compressor/PWM drive, shift registers, and stepper motors. No floppy drives yet 😔. The framework only operates on the ESP32 for now, though adding Arduino Mega and Uno timing libraries is high on my list. The framework is stable, with only a few bugs that exist due to the large amount of user control around instrument pools (e.g., if you have two different overlapping instrument pools of different sizes mirroring the same MIDI channel; very much an edge case). The user interface is coming along very nicely. After switching between 4 different languages, I have a working model running in Chrome! Apparently in 2021 serial port access was added to Chrome, Edge, and Opera. This means any device running standard Chrome can use the program. No download necessary. For now I am only doing config through my Chrome app, but adding MIDI is the next thing I'll do. At the moment I just use the ever-popular Hairless MIDI-Serial and loopMIDI to communicate from my MIDI editor to connected instruments.
-
Nah, keep em coming! It's definitely a related project, so maybe someone will find yours suits their needs more. Using Chrome is a surprising twist! I was really excited the last time I went to flash some code onto an ESP8266 and found out I could flash it directly from the project page, so there's a lot of potential there. What language is actually doing the serial interface in the browser? I see you're using WPF (🤮 , no offense 😜 ), but are you writing JavaScript or something to do the actual controlling? I also somehow have overlooked your previous update, so sorry I'm just now getting to reading it.
It's certainly possible. I know Moppy handles Sequence Stop events by turning off all currently playing notes, but I guess if Mixcraft doesn't send out a sequence stop event (is it technically starting a sequence, even? I don't know...) it may hang. I definitely had issues where sometimes, instead of a Note Off event, I would get a Note On event with velocity zero (which the MIDI spec says should be treated as a Note Off).
Ooo, that's a clever way to do that. That's actually feasible in Moppy as currently written using the mapping scripts once you've created appropriate MIDI files. I'll have to remember that.
Man, if I can find some time I would love to read through it and do a little code review. As evidenced by it taking me ... 3 months to read your last comment, things are a little busy, but I'm hoping this winter to get back into a couple coding projects so I'll at least be around code more.
Just attempting to emphasize the origin of the portmanteau. :P Sounds like things are coming along pretty well. The Chrome thing sounds exciting because it might make it even easier for people to get set up and running quickly. Actually, if you managed to get the instrument code uploadable from a website (like WLED), people could run the whole thing without leaving their browser!
-
Good to hear from you again and with such a hasty response too!
Funny you mention WPF. The WPF code has been hanging around unused for the past few months; about 10 minutes after I made my last post, it was purged from my repo and replaced with the HTML Chrome page I've been working on. Only for a few days though, so it is still quite crude. I'm still keeping the WPF server on my drive for safekeeping. My original goal was to use UDP as the primary protocol and Serial as the secondary protocol, as my project is focused on large scalability. The problem is that the Arduino UDP libraries are horrendously slow from what I tested, and I want all the code on a common framework; even now the only Arduino components I'm using are the interrupt timers and Serial comms, and the rest is raw C++. The other problem is that browsers still do not readily support UDP access, which leaves Java or WPF (though I haven't looked at Node.js or anything yet). Also, .NET is not nearly as mature as I thought it was at the time for cross-platform support.
Yes, the page is just HTML, JavaScript, and CSS. All the serial support is integrated directly into JavaScript. I think it was only 3-4 lines to get reader and writer objects you can just pass arrays to. The code is a hot mess currently, as I've only spent a few days on it between classes and homework.
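For reference, those "3-4 lines" are presumably the Web Serial API, which looks roughly like this. It's browser-only (Chrome/Edge/Opera), must be triggered by a user gesture such as a button click, and the function name and baud rate here are just illustrative:

```javascript
// Rough sketch of opening a serial port for MIDI bytes via the Web Serial
// API (navigator.serial). Not runnable outside a supporting browser.
async function openMidiSerial(baudRate = 115200) {
  const port = await navigator.serial.requestPort(); // user picks the device
  await port.open({ baudRate });
  const writer = port.writable.getWriter();
  const reader = port.readable.getReader();
  // e.g. writer.write(new Uint8Array([0x90, 60, 100])) would send a Note On
  return { port, reader, writer };
}
```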
Because Mixcraft is a DAW and not just playing pre-recorded tracks, it doesn't have sequence start and end events. It does send MIDI Control Change messages, but when you pause playback, instead of a note-off event for every sounding note (or a Reset), you get a MIDI Control Change All Notes Off message, which I assume Moppy does not implement. I had to write a secondary switch tree to catch the message, as it's not one of the more basic MIDI messages. I think it's 0xB0 0x7B.
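Those bytes check out: 0xB0 is Control Change on channel 1, and controller 0x7B (123) is All Notes Off, one of the MIDI 1.0 "channel mode" controllers (120-127). A sketch of catching it on any channel (the function name is illustrative; handling 0x78 All Sound Off as well is my addition):

```javascript
// Detect the Control Change messages that should silence an instrument:
// status 0xBn = Control Change on channel n; controller 0x7B = All Notes Off,
// controller 0x78 = All Sound Off (both are MIDI 1.0 channel mode messages).
function shouldSilence(message) {
  const [status, controller] = message;
  if ((status & 0xf0) !== 0xb0) return false; // not a Control Change
  return controller === 0x7b || controller === 0x78;
}
```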
If you do look through the code, just look at the microcontroller C++. I spent a good amount of time building it from scratch; I think only the network UDP and the timing code were somewhat copied from Moppy, and everything else should be from the ground up. The HTML and JS are absolute trash; I have no idea how people scale projects in JS, and I've also only done JS for like 3-4 weeks. My plan was to get it working, then have someone on Fiverr redo all the CSS real nice for like $25, lol. I am also adding the Serial com code at the moment, so there are weird comments and potentially unfinished code in places.
I had to look up what portmanteau meant, but now I see it and think it's brilliant!
It would be cool, but I think one of the big draws for Moppy is that it can run on any Arduino you have lying around. I might look into it, but it would take a lot of work to support so many different devices other than the ESP32. I do want to precompile binaries for each platform, though, to make users' lives easier.
-
A lot of work has been done, but this is just a relatively small update. Everything works: all you need to do is plug in a flashed device, select the COM port and MIDI input, and you're off to the races. Further instrument and device support is still to be added. For a brief project description and how it works, I added a write-up on the wiki. The device code is pretty much fully complete other than adding a device save state, additional supported instruments, and minor code refactoring. I designed in support for other Arduino platforms, but for the moment it is untested, and I need to optimize imports as the program hogs too much memory. I fully designed a message protocol to communicate back to the server with SysEx messages. Everything is still MIDI, so you can just use a MIDI-to-Serial converter like Hairless MIDI. Most of the work has been put into the GUI, which is fully functional as a MIDI bridge and configurator, but it needs an overhaul of the visuals, tracking of connected devices' configurations, and the addition of a MIDI file player. I spent most of the time learning JS and working on the backend design. I added autodetection of connecting and disconnecting MIDI ports. You can select multiple MIDI inputs and multiple MIDI passthroughs, and add/remove connected serial devices to send MIDI messages. I still need to add a built-in MIDI file reader, but you can use a DAW with loopMIDI to send MIDI to the GUI.
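Keeping the config protocol inside SysEx, as described above, means every config packet is still a legal MIDI message: 0xF0, data bytes that must each be 7-bit, then 0xF7. The framing below is a hedged sketch; the 0x7D manufacturer ID is the one MIDI reserves for non-commercial/educational use, but the device-ID and payload layout are invented, not Mechanical Midi Music's real protocol.

```javascript
// Sketch of wrapping device-config data in a System Exclusive message.
const SYSEX_START = 0xf0;
const SYSEX_END = 0xf7;
const NONCOMMERCIAL_ID = 0x7d; // manufacturer ID reserved for non-commercial use

function buildConfigSysEx(deviceId, payload) {
  // Everything between 0xF0 and 0xF7 must be 7-bit (0-127) to stay valid MIDI.
  if (payload.some((b) => b > 0x7f)) throw new RangeError('SysEx data must be 7-bit');
  return [SYSEX_START, NONCOMMERCIAL_ID, deviceId, ...payload, SYSEX_END];
}

function parseConfigSysEx(bytes) {
  if (bytes[0] !== SYSEX_START || bytes[bytes.length - 1] !== SYSEX_END) return null;
  if (bytes[1] !== NONCOMMERCIAL_ID) return null; // not one of our messages
  return { deviceId: bytes[2], payload: bytes.slice(3, -1) };
}
```

A nice property of this approach is exactly what the update says: config messages survive any MIDI-to-Serial bridge (like Hairless MIDI) untouched, because they are ordinary SysEx traffic.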
-
Over half a year later, and time for another update. I'll start with the biggest change: a new GUI! Features!
Suggestions are welcome. I still plan to add some features such as mute, the play head is a bit buggy, and song names get cut off, but it's an otherwise smooth experience. I am not fully ready for release yet, as I would like to clean up the device-side code a bit more. Currently the device resets to its initial config on disconnect, but I plan to add a memory partition to fix that. I would also like to get a few cross-platform devices working before release. I do plan on making a video about how to use the GUI, and some more about my instruments, so I will post them when ready. Another cool feature I would like to add is ESP Web Tools. With it, you could select your configuration and the tool would flash your device completely from the web. Literally all a user would need to do is plug in any ESP32, click flash, and configure. Crazy... A bunch of back-end work also went on protocol-wise to get the GUI connection working. I can explain the protocol in more detail, and I might even make a deep-dive video if people are interested. Also, happy 2 years since this initial discussion / the start of this project's offshoot from Moppy!
-
Well as always, I'm slow to finally get around to reading and responding to these. It looks awesome! I still don't know when I'll have time again to get back into building instruments and such, but I'm pretty tempted at this point to just use what you're working on instead of Moppy 😝. Love that the interface is just in a browser, and with a little bit of ESP Web Tools there's basically zero software dependencies!
-
If you look at the photo, you can see there are multiple distributors, and the one highlighted blue is the current distributor. Add will create a new distributor; selecting a distributor will set the manager to the selected distributor's settings. The Update button will change the settings of the selected distributor. Remove/Clear remove the selected distributor and all distributors, respectively. I do plan on making a video eventually on how to use it, maybe after winter break and finals.
The GUI is HTML/CSS/JS. It is relatively easy to change, but getting different page layouts at different window sizes is a bit cobbled together.
I will see what I can do I need to rework the vertical mode anyway as parts get out of order. I would combine the managers and columns but it pushes the lists too far down in wide screen. Currently, I am trying to add quality-of-life features. High notes sound awful on my pneumatic instrument as I am using 40us for the interrupt. disabling features like local storage and the LED controller got me down to 7-8us which is pretty okay for songs like Nyan Cat. I am currently looking at how I could make the interrupt dynamic or pause it when a config message is received. This requires rewriting some of the microcontroller core though. I am not quite happy with the system yet but it is certainly usable. I am also trying to add some easy-to-use flags to quickly disable portions of the program and make it easy to use. For example, you could set a default config on boot instead of using local storage or you could disable config entirely and hard code your distributor config. |
-
Ah, okay, I get it now. A video would help, but I'm sticking with my modal dialog suggestion as more intuitive (but it's just a suggestion, and just because it makes sense to me doesn't mean it'll make sense to anyone else ;P)
Agree that lists should have their own columns to avoid scrolling :) Regardless, I hope I find time to give it a try someday!
-
First off, I love this new discussion page. Second, it's been a while since my last inquiries in issue #100. To summarize: I was looking for a way to connect >16 independent MIDI channels to Moppy. Between college classes and COVID I was unable to begin on the project, and I decided to redesign it and start from scratch for a few reasons: first, the MIDI 2.0 protocol was announced, and second, I've been able to get a few fellow programmers at college to join the project.
The new program will use the MIDI protocol at its core, as opposed to custom messages such as Moppy messages. The first version will use MIDI 1.0, but we will later write a library implementation for MIDI 2.0. MIDI 2.0 brings 16 groups of 16 channels each, allowing 256 independent MIDI channels, and also increases resolution from 7-bit to 16-bit, among other things. This should allow all instruments to work independently from the core controller software. It will also allow MIDI launch pads, keyboards, and synths to work directly with instruments, or to work through the core controller to control instruments (e.g. volume or pitch bend). MIDI 1.0 has DIN and USB MIDI, which is just DIN messages in a USB wrapping, so we would have to make a custom Ethernet packet wrapping. MIDI 2.0 fixes this with the introduction of Universal MIDI Packets, which can be sent over any medium.
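To make the group/channel/16-bit-velocity point concrete, here is a sketch of decoding a MIDI 2.0 note-on from a 64-bit Universal MIDI Packet. The bit layout follows my reading of the published UMP spec (message type 0x4 = MIDI 2.0 channel voice), so double-check it against the standard before relying on it:

```cpp
#include <cstdint>

// Decoded MIDI 2.0 note-on from a 64-bit UMP channel-voice packet.
struct NoteOn2 {
    uint8_t  group;     // 0-15: which of the 16 UMP groups
    uint8_t  channel;   // 0-15: channel within the group (16 x 16 = 256 total)
    uint8_t  note;      // 0-127
    uint16_t velocity;  // full 16-bit resolution (vs. 7-bit in MIDI 1.0)
};

// word0: [mt:4][group:4][status:4][channel:4][note:8][attribute type:8]
// word1: [velocity:16][attribute data:16]
bool parseNoteOn2(uint32_t word0, uint32_t word1, NoteOn2& out) {
    uint8_t mt     = (word0 >> 28) & 0xF;  // 0x4 = MIDI 2.0 channel voice
    uint8_t status = (word0 >> 20) & 0xF;  // 0x9 = note on
    if (mt != 0x4 || status != 0x9) return false;
    out.group    = (word0 >> 24) & 0xF;
    out.channel  = (word0 >> 16) & 0xF;
    out.note     = (word0 >> 8)  & 0x7F;
    out.velocity = (word1 >> 16) & 0xFFFF;
    return true;
}
```

Since UMPs are defined as plain 32-bit words, the same parser works regardless of whether the packet arrived over USB, Ethernet, or anything else.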
Another difference is that the core controller will be written in C++ as opposed to Java, allowing it to run more efficiently, though at a cost to cross-platform compatibility. The core controller ingests files or an external MIDI stream and then lets you select which tracks are routed to which instruments. It also lets you configure instruments. If an instrument is not connected to the core controller and only uses MIDI input, it falls back to a default configuration.
The devices will initially be ESP32s, with ports to other Arduino boards later. ESP32s are about $5 apiece and faster than Arduinos, and they let us send all the messages over a network instead of USB or DIN, allowing greater scalability when using a large number of instruments. The instrument code starts by reading in MIDI through DIN/Serial/Ethernet. It locks on to the first communication type, but if no messages have been read within a certain time frame it starts actively scanning all communication lanes. After we get a message, we decode it to note/pitch bend/etc. It then passes through a filter that removes all incompatible messages. This filter is device-specific and can set the range of notes the instrument can play, whether it can pitch bend, and so on. The remaining data is then passed to a distributor that dictates which instruments on the device play which notes; it implements round-robin and straight-through settings locally. Each distributor is assigned a channel, allowing one device to play multiple channels on its instruments.
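The device-specific filter stage described above might look something like this minimal sketch. The type names, fields, and thresholds are all invented for illustration; the real firmware will define its own message representation:

```cpp
#include <cstdint>
#include <optional>

// Kinds of decoded messages this sketch cares about.
enum class MsgType { NoteOn, NoteOff, PitchBend };

struct MidiMsg {
    MsgType type;
    uint8_t channel;
    uint8_t note;  // meaningful for NoteOn/NoteOff
    int16_t bend;  // meaningful for PitchBend
};

// Per-device filter: drops notes outside the instrument's playable range
// and strips pitch bend if the hardware can't do it.
struct DeviceFilter {
    uint8_t lowestNote;
    uint8_t highestNote;
    bool    supportsPitchBend;

    // Returns the message if the device can play it, nothing otherwise.
    std::optional<MidiMsg> apply(const MidiMsg& m) const {
        switch (m.type) {
            case MsgType::PitchBend:
                return supportsPitchBend ? std::optional<MidiMsg>(m)
                                         : std::nullopt;
            case MsgType::NoteOn:
            case MsgType::NoteOff:
                if (m.note < lowestNote || m.note > highestNote)
                    return std::nullopt;
                return m;
        }
        return std::nullopt;
    }
};
```

Anything the filter passes would then flow on to the distributor stage.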
Example: you send channels 3 and 4 from the core controller to device 2. Device 2 then sends channel 3 notes to distributor 3 and channel 4 notes to distributor 4. Device 2 has six floppy-drive instruments on distributor 3 and two instruments on distributor 4. The result is that channel 3 plays multiple round-robin notes across its six drives, while channel 4 plays the same notes on the remaining two drives for volume. All of these settings, creating distributors and allocating instruments, can be configured dynamically from the core controller or hard-coded as device defaults. I know, confusing...
TL;DR: devices are ESP32s, MIDI messages are filtered, and distributors are assigned per MIDI channel, so instruments on the same device can be split between channels.
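The per-channel round-robin routing described above can be sketched as a small class. Again, this is a toy model with invented names, not the project's actual code:

```cpp
#include <cstdint>
#include <vector>

// A distributor owns one MIDI channel and a set of instrument slots on the
// device, and hands successive notes to successive instruments (round robin).
class Distributor {
public:
    Distributor(uint8_t channel, std::vector<int> instruments)
        : channel_(channel), instruments_(std::move(instruments)) {}

    // Does this distributor handle the given channel?
    bool accepts(uint8_t channel) const { return channel == channel_; }

    // Returns the instrument index that should play the next note.
    int nextInstrument() {
        int idx = instruments_[cursor_];
        cursor_ = (cursor_ + 1) % instruments_.size();
        return idx;
    }

private:
    uint8_t          channel_;
    std::vector<int> instruments_;
    std::size_t      cursor_ = 0;
};
```

A "straight-through" mode would simply return every instrument in the list for each note instead of cycling; that is what lets two drives double the same channel for volume.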
The instruments we plan to eventually support are pneumatic high-speed solenoids, electromechanical solenoids for drums, floppy drives, stepper motors, flyback transformers, and MIDI passthrough; someone also wanted to retrofit slide whistles, but I will leave that up to them.
I am curious about any feedback or criticism you might have for such a system. It is still a rough outline, but we are starting to get TCP communication and MIDI transmission working for initial testing.