-
I asked https://matrix.to/#/@edubot:matrix.org what parameters we might want to have a user specify, and this was its response: Here's a list of some of the most important musical parameters that could be included in such an interface:
These parameters can provide a good starting point for generating a piece of music, but there's always room for creativity and deviation from the initial parameters.
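EduBot's actual list isn't reproduced above, but as a rough illustration of what a user-facing parameter set for such an interface might contain (these field names are assumptions for discussion, not EduBot's response), consider something like:

```typescript
// Illustrative only: one possible shape for user-specified musical parameters.
// These fields are assumptions for discussion, not EduBot's actual list.
interface MusicParameters {
  key: string;              // e.g. "C", "G", "F#"
  mode: "major" | "minor";
  tempoBpm: number;         // beats per minute
  timeSignature: string;    // e.g. "4/4", "3/4"
  lengthInMeasures: number;
  instrument?: string;      // optional timbre/voice choice
}

const example: MusicParameters = {
  key: "C",
  mode: "major",
  tempoBpm: 90,
  timeSignature: "4/4",
  lengthInMeasures: 8,
};

console.log(example);
```

A small, typed structure like this would make it easy to hand the same parameters both to a generative model and to the Music Blocks widgets.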
-
And then I asked it about adding computation to the mix...

Me:

EduBot:
Remember, these are just possibilities. The specific parameters would depend on the capabilities of the system and the goals of the user.
-
I’ve been following this discussion, and it’s fascinating how we’re exploring ways to combine generative AI with music ideation. I’d like to share some thoughts and ideas that might add value to this conversation:
-
@pikurasa I have made some progress with getting GPT-4 to help with musical ideas in Music Blocks. I set up a Node.js backend that works with the model and takes user input. By tweaking the prompts and organizing the outputs better, I got more consistent musical sequences. Now the model generates melodies, drum patterns, and turtle movements that sync up nicely. Plus, the organized JSON makes it easier to work with the outputs in the code.

One issue I ran into was keeping the music consistent throughout each sequence. Adjusting the prompts helped, but I’m still trying to figure out how to keep a theme going or add variations without making the music sound random.

Based on this setup, what do you think would be the best next step to improve the musical flow? And do you have any suggestions on how we could experiment with different generation methods (like Markov chains or neural networks) to make the music more interesting?
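To make the organized-JSON idea concrete, here is a minimal sketch of what a structured reply from the model could look like and how the backend might validate it; the field names and the `parseSequence` helper are illustrative assumptions, not the actual schema from this setup:

```typescript
// Minimal sketch of a structured output format for model-generated music.
// Field names are illustrative assumptions, not the schema used in the demo.
interface GeneratedSequence {
  melody: { pitch: string; octave: number; duration: number }[]; // duration in beats
  drums: { instrument: string; beat: number }[];                 // beat positions within a measure
  turtleMoves: { action: "forward" | "right" | "left"; value: number }[];
}

// Parse and lightly validate the JSON text returned by the model.
// Throws if the reply is not valid JSON or is missing the expected top-level arrays.
function parseSequence(modelReply: string): GeneratedSequence {
  const data = JSON.parse(modelReply);
  for (const key of ["melody", "drums", "turtleMoves"]) {
    if (!Array.isArray(data[key])) {
      throw new Error(`Model reply is missing or has a malformed "${key}" array`);
    }
  }
  return data as GeneratedSequence;
}

// Example of a reply the prompt could ask the model to produce.
const reply = `{
  "melody": [{ "pitch": "C", "octave": 4, "duration": 1 }, { "pitch": "E", "octave": 4, "duration": 1 }],
  "drums": [{ "instrument": "kick", "beat": 1 }, { "instrument": "snare", "beat": 3 }],
  "turtleMoves": [{ "action": "forward", "value": 100 }, { "action": "right", "value": 90 }]
}`;

console.log(parseSequence(reply).melody.length); // 2
```

Validating the reply before handing it to the Music Blocks side makes prompt tweaks safer: a malformed reply fails loudly instead of producing silently broken sequences.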
-
@pikurasa In my demo, the user chooses a few random notes on the PhraseMaker grid. The AI then completes the phrase in a way that forms a meaningful melody rather than just random sounds. Right now, percussion and movements are mandatory, ensuring rhythm and structure. You could make them optional to allow more flexibility in experimentation.

To improve AI-generated music, we can use Markov chains to predict the next note based on past sequences, ensuring patterns emerge naturally (see the sketch below). Also, using neural networks, we could have the model learn from famous compositions to make it more engaging and fun for kids.

demo.mp4

Disclaimer: Sorry if the generated composition isn’t always melodic; I'm still refining the model. With some tweaks, I can improve it to generate more meaningful music. I'm also researching open-source models to enhance the AI’s understanding of musical patterns and theory.
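As a rough illustration of the Markov-chain idea (not the code from the demo), a first-order chain over note names could be trained on short phrases and then used to complete a new one:

```typescript
// First-order Markov chain over note names: count transitions seen in example
// phrases, then sample the next note from the learned distribution.
// The training phrases below are made up for illustration.
type Note = string;

function buildTransitions(phrases: Note[][]): Map<Note, Note[]> {
  const transitions = new Map<Note, Note[]>();
  for (const phrase of phrases) {
    for (let i = 0; i < phrase.length - 1; i++) {
      const from = phrase[i];
      const next = transitions.get(from) ?? [];
      next.push(phrase[i + 1]);
      transitions.set(from, next);
    }
  }
  return transitions;
}

function generate(transitions: Map<Note, Note[]>, start: Note, length: number): Note[] {
  const result: Note[] = [start];
  let current = start;
  for (let i = 1; i < length; i++) {
    const candidates = transitions.get(current);
    if (!candidates || candidates.length === 0) break; // dead end: stop early
    current = candidates[Math.floor(Math.random() * candidates.length)];
    result.push(current);
  }
  return result;
}

// Usage: train on a couple of short phrases, then complete a phrase starting from "C4".
const phrases = [
  ["C4", "D4", "E4", "G4", "E4", "D4", "C4"],
  ["C4", "E4", "G4", "E4", "C4"],
];
const chain = buildTransitions(phrases);
console.log(generate(chain, "C4", 8));
```

Because the transitions come from the user's own PhraseMaker notes (or any corpus we feed in), the completed phrase tends to stay in character rather than sounding random; a higher-order chain could capture longer motifs at the cost of needing more data.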
-
Let's workshop the design for Music Ideation Through Generative AI together.