**Biography of Presenters**
**Rachel Bittner** is a Senior Research Scientist at Spotify in Paris. She received her Ph.D. in Music Technology in 2018 from the Music and Audio Research Lab at New York University under Dr. Juan P. Bello, with a research focus on deep learning and machine learning applied to fundamental frequency estimation. She has a Master's degree in mathematics from New York University's Courant Institute, as well as two Bachelor's degrees in Music Performance and in Mathematics from the University of California, Irvine. In 2014-15, she was a research fellow at Telecom ParisTech in France after being awarded the Chateaubriand Research Fellowship. From 2011-13, she was a member of the Human Factors division of NASA Ames Research Center, working with Dr. Durand Begault. Her research interests are at the intersection of audio signal processing and machine learning, applied to musical audio. She is an active contributor to the open-source community, including being the primary developer of the pysox and mirdata Python libraries.
**Mark Cartwright** is an Assistant Professor at New Jersey Institute of Technology in the Department of Informatics. He completed his PhD in computer science at Northwestern University as a member of the Interactive Audio Lab, and he holds a Master of Arts from Stanford University (CCRMA) and a Bachelor of Music from Northwestern University. Before his current position, he spent four years as a researcher in the Music and Audio Research Lab (MARL) and the Center for Urban Science and Progress (CUSP) at New York University (NYU). His research lies at the intersection of human-computer interaction, machine learning, and audio signal processing. Specifically, he researches human-centered machine listening and audio processing tools for creative expression with sound and understanding the acoustic world.
**Ethan Manilow** is a PhD candidate in Computer Science at Northwestern University under advisor Prof. Bryan Pardo. His research lies in the intersection of signal processing and machine learning, with a focus on source separation, automatic music transcription, and open source datasets and applications. Previously he was an intern at Mitsubishi Electric Research Labs (MERL) and at Google Magenta. He is one of the lead developers of nussl, an open source audio separation library. He lives in Chicago, where he spends his free time playing his guitar and smiling at dogs he passes on the sidewalk.
### 6. Teaching Music Information Retrieval
The research field of Music Information Retrieval (MIR) has a history of more than 20 years. During this time, many different tasks have been defined and a variety of algorithms have been proposed. MIR topics are taught around the world in a variety of settings in both academia and industry, in forms ranging from regular undergraduate and graduate courses to specialized tutorials, seminars, and online courses. MIR is a fundamentally interdisciplinary topic, which creates unique challenges when it is taught. The goal of this tutorial is to cover various topics of interest to people involved with teaching MIR. The material is informed by modern pedagogical practices and how they can be adapted to address the unique characteristics of learning about MIR. Because the global COVID pandemic has increased activity and interest in online learning, advice and guidelines for effective online teaching of MIR will also be provided. The concepts and ideas presented will be illustrated with concrete examples and use cases drawn from the presenter's extensive experience teaching MIR in a variety of settings. Although this is not the primary focus of the tutorial, these examples can also serve as an introduction to MIR for participants who are new to the field.
**Biography of Presenters**
**George Tzanetakis** is a Professor in the Department of Computer Science, with cross-listed appointments in ECE and Music, at the University of Victoria, Canada. He was Canada Research Chair (Tier II) in the Computer Analysis of Audio and Music from 2010 to 2020. In 2012, he received the Craigdarroch research award in artistic expression at the University of Victoria. In 2011 he was Visiting Faculty at Google Research. He received his PhD in Computer Science at Princeton University in 2002 and was a Postdoctoral Fellow at Carnegie Mellon University in 2002-2003. His research spans all stages of audio content analysis, such as feature extraction, segmentation, and classification, with specific emphasis on music information retrieval.
For Kadenze Inc., he designed and developed the first widely available online program in Music Information Retrieval, consisting of three courses launched in December 2020; more than 2000 students from around the world have been involved with the program. He is also the primary designer and developer of Marsyas, an open source framework for audio processing with specific emphasis on music information retrieval applications. His pioneering work on musical genre classification received an IEEE Signal Processing Society Young Author Award and is frequently cited. He has given several tutorials at well-known international conferences such as ICASSP, ACM Multimedia, and ISMIR. More recently he has been exploring new interfaces for musical expression, music robotics, computational ethnomusicology, and computer-assisted music instrument tutoring. These interdisciplinary activities combine ideas from signal processing, perception, machine learning, sensors, actuators, and human-computer interaction, with the connecting theme of making computers better understand music in order to create more effective interactions with musicians and listeners. More details can be found at [http://www.cs.uvic.ca/gtzan](http://www.cs.uvic.ca/gtzan).