Backing Store

You will need a database for the backend. I would suggest Monday.com. It offers a familiar spreadsheet-style database and is surprisingly flexible: you have workspaces, which contain tables (Monday calls them boards), which contain groups of items. The tables define the schemas, the groups are partitions within the schema, and the items are configurable data rows with typed fields. You get all the standard types plus location, file, dropdown, checkbox, status, etc. Everything is color coded, and basically once you're in a particular table you can add, remove, and reorder fields, and that format applies to all the groups in the table simultaneously. The whole thing has a GraphQL API that you can hit from Elixir.

One cool trick I did for an ex-friend: I stored SQLite databases in a Files field. We were doing a bunch of fluid management from an Android app; the app worked with a context-specific SQLite file, and once it was done with the task it uploaded it to Monday. The cool thing is Monday versions the files, so you get that for free. The Elixir server can interact with the SQLite file too, but heads up, I had all sorts of trouble with the Elixir wrapper (Exqlite, I think it's called). I ended up ditching it and writing my own GenServer that wrapped the sqlite3 CLI, which I installed in the Docker image (rough sketch below).

I did end up with complexity past my comfort level. I wanted strong typing, so I had Elixir structs to formalize the data flow, but those had to match the SQLite schemas, and code churn made things brittle because sometimes shit would silently break on the SQLite side. So I had to write some pretty robust tests: create a fresh db from the schema stored in priv, populate it with test data, send it up to Monday as a SQLite .db file, request it back, dump it, and compare. Kind of a pain in the ass, but probably the right thing to do. Just tell your guys to start from the struct and move outward when altering schemas. The struct can be checked quickly with the fast tests, but the schemas have to go through the whole upload/download cycle, which takes a while.

If you start to feel excessively constrained by SQLite, check out UnQLite: it's small, NoSQL, and has a JavaScript-like scripting language, so your data can be executable. That requires a different way of thinking about practically everything I said above with regard to structs, so you may want to A/B test it with two developers and see how the architecture evolves. Just don't mix the two paradigms or you will enter a bifurcated data hellscape and one or both of the A/B developers will murder you in your sleep.

Make utility functions that pull data from Monday so your devs can inspect the cloud data interactively from IEx in dev and test (also sketched below). You can use ChatGPT to make the structs by giving it the text output of the utility and telling it to make an Elixir struct for it.
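For reference, the GenServer that wrapped the sqlite3 CLI was roughly this shape. This is a minimal sketch from memory, the module and function names are made up, and it assumes the sqlite3 binary is on the PATH in the Docker image and that you have Jason around for JSON:

    defmodule MyApp.SqliteCli do
      use GenServer

      # Client API
      def start_link(db_path), do: GenServer.start_link(__MODULE__, db_path, name: __MODULE__)

      # Run one SQL statement against the current .db file and get rows back as maps.
      def query(sql), do: GenServer.call(__MODULE__, {:query, sql})

      # Server callbacks
      @impl true
      def init(db_path), do: {:ok, db_path}

      @impl true
      def handle_call({:query, sql}, _from, db_path) do
        # -json makes sqlite3 print result rows as a JSON array (empty output means no rows)
        case System.cmd("sqlite3", ["-json", db_path, sql], stderr_to_stdout: true) do
          {"", 0}  -> {:reply, {:ok, []}, db_path}
          {out, 0} -> {:reply, Jason.decode(out), db_path}
          {err, _} -> {:reply, {:error, err}, db_path}
        end
      end
    end

From IEx that's just MyApp.SqliteCli.query("select * from runs limit 5") once the process is started against whatever .db file came down from Monday.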
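And the Monday-pulling utility plus the struct it feeds looked something like this. Again a sketch: the GraphQL query shape and the field names are placeholders (check the current Monday API docs), and it assumes Req for HTTP and a token sitting in app config:

    defmodule MyApp.Monday do
      # Monday's GraphQL endpoint; auth is the API token in the authorization header.
      @url "https://api.monday.com/v2"

      # Raw GraphQL call, handy from IEx for poking at cloud data in dev and test.
      def query(graphql) do
        Req.post!(@url,
          json: %{query: graphql},
          headers: [{"authorization", Application.fetch_env!(:my_app, :monday_token)}]
        ).body
      end

      # Example: pull the items of one table/board (query shape from memory, verify it).
      def board_items(board_id) do
        query("""
        query {
          boards(ids: [#{board_id}]) {
            items_page { items { id name column_values { id text } } }
          }
        }
        """)
      end
    end

    # The struct you start from when altering schemas; field names are illustrative.
    defmodule MyApp.Node do
      defstruct [:id, :name, :location, :status, :image_prompt]
    end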
Anyway, back on the Monday side, you can do cool stuff like make the LiveView sync up style-wise with the Monday tables. What I did was take a screenshot of, say, a dropdown, tell ChatGPT to make Tailwind styles for the colors, then over in the LiveView pull that specific style so it matches. Pretty pro-style, and your non-programmer users will like it because it's more intuitive: they will be working both in a Monday table and in your Phoenix UI, and they will be like why the fuck is this color weird, and you can explain that these are two completely different systems with different render logic, but they will not give two fucks. They don't care what a pain in the ass this is, so it's better to just harmonize things and keep them from bitching.

For the visuals I would just use a Long Text column to hold an AI image prompt, which gets cooked and then transmitted to a projector associated with that node. The image prompt can be templated, of course, for correlation with live experiment state. When I refer to live experiment variables, they can be human-readable or PSI indexes that reference time-space reality.

Monday has automations that get triggered on data and interaction events, and those can make API calls (they call them cloud functions, or wait, maybe that's Google, whatever, the point is you can hit APIs). So you expose functionality on your server with a JSON endpoint, and that's how Monday talks back to your server (rough controller sketch below); and of course you can use Zapier to hit anything from there.

So anyway, you will want tables for your system which define experiment topology, system configuration, status display, run data, and control panel functions. You can make control panels with any kind of display you want: maps, dials, graphs, all that stuff. It'll be interesting to see how you guys integrate the GIS, because you will want a pixel-accurate map that lets you see the precise location of the experiment features, including node positions, but you also need to configure shit per node, per region, per domain. I don't know, I hope you have a good front-end guy. You could maybe use Leaflet for the geo and Mermaid for the other domains.

At one point I was editing KML files dynamically. Elixir rocks at templating: your devs can basically work with a template as they would with HTML, but it's got slots for Elixir code and data, so it's a very clean dev experience. I guess you could use Google Earth and overlay the KML data. I've never used WebVR or whatever, but if you have 3D then you can do an experimenter view where you organize their spatial experience and, I suppose, play it in real time; then you could use VR for part of the experience.

I had, before my laptop died, a setup where I was using Sound As Pure Form (sapf) to programmatically generate sounds. Since you already have the associated spatial data, you can do stuff like 3D sound mapped not only to the physical environment but also to your model, so you could have programmatically defined soundscapes, and this permits combinatorial effects on the target, like applying a dynamic binaural sound envelope. Just store the sapf script in a Long Text cell, and when it gets edited it is transmitted down to the device for storage and execution. And of course, since the scripts will be going through Elixir on the way down, they can be templated and instantiated with context before final descent to the terrain level (EEx sketch below). Once you start defining your model using Nx tensors you will have a better idea of this dynamic (toy example below).
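The JSON endpoint Monday talks back to is just a normal Phoenix controller, roughly like this. Sketch only: the route and handler names are mine, and as I remember it Monday's webhook setup POSTs a challenge field it expects echoed back, so double-check that in their docs:

    defmodule MyAppWeb.MondayWebhookController do
      use MyAppWeb, :controller

      # Monday verifies the webhook URL by POSTing a challenge that you echo back.
      def handle(conn, %{"challenge" => challenge}) do
        json(conn, %{challenge: challenge})
      end

      # Real events land here; the payload shape depends on the trigger you configured.
      def handle(conn, %{"event" => event}) do
        MyApp.Experiments.apply_event(event)  # hypothetical dispatcher on your side
        json(conn, %{ok: true})
      end
    end

    # In the router:
    #   post "/api/monday/webhook", MondayWebhookController, :handle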
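The templating for the KML and the sapf scripts is plain EEx. Something like this, with the template path and context fields made up; the .eex file is just the stored script with <%= %> slots where the live context goes:

    defmodule MyApp.Sapf do
      # Instantiate the stored sapf script with experiment context before it
      # descends to the device for storage and execution.
      def render(node) do
        path = Application.app_dir(:my_app, "priv/sapf/binaural.sapf.eex")

        EEx.eval_file(path,
          carrier_hz: node.carrier_hz,
          beat_hz: node.beat_hz,
          duration_s: node.duration_s
        )
      end
    end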
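And this is the kind of tensor-shape thinking I mean, as a toy Nx sketch. The axes and numbers are invented; the point is that one {nodes, samples} tensor describes the whole spatial field, with each node getting its own gain on a shared envelope:

    sample_rate = 8000
    t = Nx.divide(Nx.iota({1, sample_rate}), sample_rate)     # {1, 8000}, seconds
    envelope = Nx.sin(Nx.multiply(t, 2 * :math.pi() * 7.0))   # shared 7 Hz modulation
    gains = Nx.tensor([[1.0], [0.5], [0.25], [0.0]])          # {4, 1} per-node gain
    field = Nx.multiply(gains, envelope)                      # broadcasts to {4, 8000}
    Nx.shape(field)  #=> {4, 8000}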
The programmer will have to develop an intuitive understanding of the tensor shape and the resulting effect on the user, and this whole structure will need to be incorporated at the model root so that the experiments can define precisely the effect they want to project, yet not be so brittle that you're constantly hacking on them to keep the experience from being jarring. Remember, all this shit needs to maintain its structure from cloud to edge, so you need to be careful about data structure design so that, without too much ceremony, you can draw cymatic patterns at the cloud UI level and have the spatial and structural dynamics obeyed down to the physical layer, enforced by the GIS experiment substrate.

I forgot you're not in Dallas or I would've said check out Watermark Church; their system is almost IMAX-level resolution across three floor-to-ceiling panels on stage. I don't know how far you're gonna take this thing, but it could be used for public performances, not just timeline research. I mean, if you set the experiment up in a forest, you could for example have live animals, perhaps drugged a little bit so they're not skittish, and they could provide unscripted interactive behavior to the terrarium.

You can use ElevenLabs to generate voices and attach them to agents hosted in Elixir processes. This is one of the areas where Elixir shines: since it's running on the BEAM, it can have millions of processes, so your voices and sounds have lots of headroom. Anyway, that's it for this first pass; we'll see where we get to and decide if it needs to be expanded to handle other types of psychosocial dynamics with drones and pyrotechnics.
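P.S. Here's roughly what I mean by a voice agent as an Elixir process. A sketch only: the ElevenLabs endpoint and parameters are from memory, so verify them against their docs, and play_on/2 is a stand-in for however you push audio down to a node:

    defmodule MyApp.VoiceAgent do
      use GenServer

      def start_link(opts), do: GenServer.start_link(__MODULE__, opts)

      # Ask this agent to speak a line at a given node.
      def say(pid, text, node_id), do: GenServer.cast(pid, {:say, text, node_id})

      @impl true
      def init(opts), do: {:ok, Map.new(opts)}

      @impl true
      def handle_cast({:say, text, node_id}, %{voice_id: voice_id} = state) do
        # ElevenLabs text-to-speech call as I remember it; check their current API.
        audio =
          Req.post!("https://api.elevenlabs.io/v1/text-to-speech/#{voice_id}",
            json: %{text: text},
            headers: [{"xi-api-key", Application.fetch_env!(:my_app, :elevenlabs_key)}]
          ).body

        MyApp.Audio.play_on(node_id, audio)  # hypothetical: ship the bytes to the node
        {:noreply, state}
      end
    end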