Learning Log: Tracking Daily Progress
I think my workflow for publishing my daily learning log is this: use my notebook to jot down ‘live’ notes as I read/watch/interact with things, then refine those notes into articles published on my main blog. This is pretty fast so far, but since I write faster in Markdown, I may start questioning why I switched over to WordPress to begin with… ¯\_(ツ)_/¯
Note: Pasting in the ASCII for that shrug emoticon required a lot of Markdown escape characters… I’ll probably make an Atom snippet for it if I find myself shrugging often as I type.
Note 2: Publishing my live notes is a safety net in case I don’t refine them into an article!
I’ve been meaning to revisit this blog post for quite a while. Benn Jordan (The Flashbulb) is one of my favorite musicians, and his rapid adoption of new technology is a source of inspiration. A few months ago I saw him post about exploring TensorFlow and using his own compositions as the dataset.
I’ve recently become interested in machine learning/deep learning as related to arts and music, so I thought I’d revisit this!
“After all, there is no real push to innovate anymore. In the early days of the DAW, there was a climb to support more tracks, sample more realistically, and match, in hopes to replace, the reverbs and tones of expensive rack gear.”
- I completely understand this sentiment. When I started exploring music technology, there was a lot of hacking involved to get software configured correctly (particularly Logic Pro!). I credit much of my technical problem-solving skill to those early explorations with music technology!
The Kadenze course he mentions has been on my playlist for a while. I’m excited to try it out as soon as I finish up some other projects.
The most interesting takeaway for me is his desire to recreate himself musically via TensorFlow.
There are many things I need to become more fluent with before I can fully appreciate this:
- TensorFlow, Deep Dream, and machine learning paradigms (generally)
- How to curate and use a dataset of compositions
My reasons/goals for exploring this:
- Desire to get back into music and revisit my past compositions
- Combine my passion for programming/technology with my passion for music
- Explore generative arts/algorithmic composition
- Completely explore all the links in the blog post
- Listen to the examples
- Possibly reach out to him at some point?
This is the first video in a (relatively) new playlist by Dan Shiffman on the Coding Train. I started this video last week but wasn’t able to give it much attention. I’m revisiting it now because of an increased interest in understanding more about algorithmic creativity and machine learning. I initially held off on this series because I thought that it was centered around Processing (Java) and not P5 and thus not quite aligned with my current pursuits. However, I saw tonight that the most recent video IS using P5, so I want to dive in and start building a foundation in the concepts of machine learning and neural networks.
I retained some of the basics from last week, but I want to work through this series slowly. This will likely be an ongoing focus for a while. I remember the introduction to the basic concept of a perceptron: input -> (thing happens) -> output, and that this is the core building block for neural networks. As the system becomes more complex, the inputs and outputs become more complex.
I also remember the notion of weights being applied to the inputs as they pass into the perceptron. I’m looking forward to learning more about this and interacting with the series as I learn. Dan Shiffman is such a masterful (and inspirational!) teacher, and I’m glad that he’s added neural networks to the already incredible Nature of Code content!
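To pin down that input -> (thing happens) -> output picture for myself, here’s a tiny perceptron sketch in plain JavaScript. This is my own sketch (not code from the video), using the textbook sign activation and the simple error-driven training rule; I’ve left out the bias input to keep it minimal:

```javascript
// Minimal perceptron: each input is multiplied by a weight,
// the weighted sum goes through an activation function,
// and that becomes the output.
class Perceptron {
  constructor(numInputs) {
    // start with random weights in [-1, 1]
    this.weights = Array.from(
      { length: numInputs },
      () => Math.random() * 2 - 1
    );
  }

  // activation: sign function, outputs +1 or -1
  activate(sum) {
    return sum >= 0 ? 1 : -1;
  }

  // feed-forward: input -> weighted sum -> activation -> output
  guess(inputs) {
    const sum = inputs.reduce(
      (acc, x, i) => acc + x * this.weights[i],
      0
    );
    return this.activate(sum);
  }

  // supervised training step: nudge each weight by
  // error * input * learning rate
  train(inputs, target, learningRate = 0.1) {
    const error = target - this.guess(inputs);
    this.weights = this.weights.map(
      (w, i) => w + error * inputs[i] * learningRate
    );
  }
}
```

So “more complex systems” just means more inputs, more weights, and layers of these units feeding into each other.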
Accompanying reading: Nature of Code, Chapter 10: Neural Networks