MS_038_F21 | Machine Learning for Artists

Professor Douglas Goodwin

MW 09:35-10:50AM

Scripps Campus, Steele Hall, 229

(Image: monalisa_hinton)

DESCRIPTION

Machine learning (ML) is a branch of computer science that powers automatic translation and speech recognition (Apple's Siri, Amazon's Alexa, Google Assistant), product recommendations (Netflix, Amazon, etc.), transportation (Waymo, Tesla, the City of Copenhagen), and political campaigns (Facebook and Cambridge Analytica). ML is becoming a familiar presence in our lives; computer scientists and developers introduce new applications every day for chatting with humans, recommending the best course of action, and making predictions about the future. In spite of all the press, ML remains daunting to non-specialists. This class seeks to bridge that divide: it introduces ML concepts to students without prior experience and provides templates to get students working in ML right away.

We will study and remake artworks by Mario Klingemann, Anna Ridler, Sougwen Chung, Memo Akten, Helena Sarin, Tom White, and others. We will try several techniques and frameworks such as image segmentation, CycleGAN, pix2pix, and TensorFlow. Students will propose and work on a larger project in the last third of the class.

Prerequisite: some prior programming experience, ideally with p5.js, ml5.js, or Python.

QUESTIONS

  • What is machine intelligence and artificial intelligence?
  • What are examples of machine intelligence?
  • How does AI impact you?
  • Are you excited about AI, concerned about it, or both?
  • What topics in AI interest you?

Required Instructional Materials

Free P5.js editor account

To work on ml5.js in an online context.

Free GitHub account

An online development platform. Our syllabus lives on GitHub. Essential.

Free Google Colab account with Google Drive

To run ML code in the cloud. Integrate it with Google Drive for the full experience!

Recommended Material

The Information: A History, a Theory, a Flood, James Gleick

Aesthetic Programming, by Winnie Soon & Geoff Cox, http://aesthetic-programming.net.

Assessment

40% Weekly assignments

40% Final project (proposal and completion)

20% Attendance + Participation

Final Project Grading is based on the following factors:

  1. Perseverance
  2. Faithfulness to the proposal
  3. Creative engagement
  4. Application of critical thinking
  5. Preparation (research, accumulation of needed materials, time management)
  6. Success of the finished piece

Course and Institutional Policies

COVID-19 Policies:

Please wear your masks during class. Please consult https://www.scrippscollege.edu/scripps-strong/return-to-campus-plan/ for the latest information.

Attendance Policy:

You may miss up to four classes and still pass this class. Note that 60 percent of your grade comes from in-class activities and weekly assignments, and that these cannot be repeated except in extraordinary circumstances.

Participation Policy:

You are expected to be attentive, ask questions, and work both alone and with a partner to complete your work.

Late Assignment and Missed Exam Policy:

Labs and in-class activities will not be repeated except in extraordinary circumstances.

Academic Integrity:

Students are expected to abide by the Scripps College academic integrity code. You must submit work that is your own and that is original work for this class. All sources must be documented; omission of sources is considered plagiarism, even if it is an oversight and/or unintentional. All plagiarism will be reported to the department and Dean’s office for further action. For this course, collaboration is allowed on lab activities and assignments if and only if all contributions are documented.

Permissible cooperation should never involve one student having possession of a copy of all or part of work done by someone else, in any form (e.g. email, Word doc, Box file, Google sheet, or a hard copy). Also, assignments that have been previously submitted in another course may not be submitted for this course, and I discourage you from finding solutions on Stack Overflow or other online forums to paste into your notebooks.

Accommodations for Students with Disabilities:

Scripps students seeking to register to receive academic accommodations must contact Academic Resources and Services Staff (ARS) at ars@scrippscollege.edu to formalize accommodations. Students will be required to submit documentation and meet with a staff member before being approved for accommodations. Once ARS has authorized academic accommodations, a formal notification will be sent out.

A student’s home campus is responsible for establishing and providing accommodations. If you are not a Scripps student, you must contact your home institution to establish accommodations. Below is a list of coordinators on the other campuses:

CMC - Julia Easley, julia.easley@claremontmckenna.edu

Harvey Mudd – Deborah Kahn, dkahn@hmc.edu

Pitzer- Gabriella Tempestoso, gabriella_tempestoso@pitzer.edu

Pomona - Jan Collins-Eaglin, Jan.Collins-Eaglin@pomona.edu

Inclusivity Statement:

This class is an example of Scripps College’s commitment to changing the norms in Computer Science. Creating this initiative at a liberal arts women's college is a bold step toward correcting the gender imbalance in this field.

Our community represents a wide variety of backgrounds and perspectives. We are committed to providing an atmosphere for learning that respects diversity.

Institutional Policies:

Students are responsible for reviewing Scripps College’s policies on incomplete grades, sexual misconduct, adverse weather, as well as student evaluation of instruction, and days of special concern/religious holiday.

ASSIGNMENTS (updated Sept. 03)

| W | Date | Theme | Reading (read before class) | Videos (watch before class) | Assignment (complete before class) |
|---|------|-------|-----------------------------|-----------------------------|------------------------------------|
| 1 | 08/30 | Introduction to Machine Learning with ml5.js | | | |
| | 09/01 | Awesome Machine Learning Art | AI4D workshop links | | |
| 2 | 09/06 | LABOR DAY (no class) | | | Make a list of projects and people that interest you. |
| | 09/08 | Image Classification with Teachable Machine | What is a Teachable Machine? | Teachable Machine 1: Image Classification | |
| 3 | 09/13 | Tracking bodies | Hello-ml5 | Nostalgia -> Art -> Creativity -> Evolution as Data + Direction, Alyosha Efros | Submit a project idea to Discord using Pose Estimation. |
| | 09/15 | ML5js: PoseNet | | ml5.js Pose Estimation with PoseNet, Dan Shiffman | |
| 4 | 09/20 | Tracking faces and hands | https://learn.ml5js.org/#/reference/face-api and https://learn.ml5js.org/#/reference/handpose | How Machine Learning Can Benefit Human Creators, Dr. Rebecca Fiebrink | Submit a project idea to Discord that uses hand or face tracking. |
| | 09/22 | | | Shiffman: ml5.js Pose Estimation with PoseNet | |
| 5 | 09/27 | Style Transfer | Ml5.js StyleTransfer | The Art Of Deception - Encountering Perception as a Creative Material, Shiry Ginosar | Submit a project idea to Discord that combines style transfer with pose estimation or hand/face tracking. |
| | 09/29 | | | Gene Kogan: Visualization, deepdream, style & texture synthesis | |
| 6 | 10/04 | pix2pix | Gene Kogan's pix2pix tutorial | A Socratic debate, Alyosha Efros and Phillip Isola | Submit a project idea to Discord that uses pix2pix with pose estimation or hand/face tracking. |
| | 10/06 | | | | |
| 7 | 10/11 | DIY Neural Networks | | Neural Abstractions, Tom White | Submit a project idea to Discord that uses a DIY Neural Network. |
| | 10/13 | | | | |
| 8 | 10/18 | FALL BREAK (no class) | | | |
| | 10/20 | | | | |

Outline

W Date theme Slides Reading videos Demo Assignment Activity Art
1 08/30 Introduction to Machine Learning with ml5.js intro_01.pdf Hilary Mason explains machine learning to 5 different people A Beginner's Guide to Machine Learning with ml5.js Image classifier on an image Image classifier with webcam Object Detection-YOLO-Webcam Intro to ml5, Dan Shiffman Image classifier with ml5 and MobileNet, Dan Shiffman Introductions, Discussion Petra Cortright, VVEBCAM 2007
09/01 Awesome Machine Learning Art | AI4D workshop links Anna Ridler, Nicer Tuesdays ml5.js examples tf.js examples How to host p5 sketch on github pages ml5 image classifier webcam (also works on the mobile phone, full screen) Build on top of the image classifier example(demo) from the coding session. Publish it on your blog / GitHub. Add your homework link to the list below. Make something new using one example from the collected ml5js examples https://learn.ml5js.org/#/ Laws of Ordered Form, Anna Ridler
2 09/06
09/08 Image Classification with Teachable Machine What is a Teachable Machine? KNN image classifier? How does Teachable Machine work? Lauren McCarthy Explores Surveillance and Relationships Existing projects about transfer learning using a webcam How to build a Teachable Machine with TensorFlow.js, Nikhil Thorat Coding train: ml5.js: KNN Classification parts 1- 3, Dan Shiffman Add new outputs to the KNN Image Classifier example, mix it with videos, games, or physical computing. Publish your project on GitHub or your own blog, or record a video and put it on your blog. Add your project link below.
3 09/13, Tracking bodies What is PoseNet? Can you use PoseNet + KNN Image Classifier? What is BodyPix body segmentation? *The Shape of Art History in the Eyes of the Machine*, Ahmed Elgammal Real-time Human Pose Estimation in the Browser with TensorFlow.js, *Dan Oved*, freelance creative technologist at Google Creative Lab* Introducing BodyPix: Real-time Person Segmentation in the Browser with TensorFlow.js, *Dan Oved* Hour of Code with p5.js and PoseNet, Dan Shiffman ml5.js Pose Estimation with PoseNet, Dan Shiffman Make a KNN Image Classifier Body, Movement, Language: AI Sketches With Bill T. Jones
09/15 PoseNet PoseNet + KNN Image Classifier BodyPix Maya Man Interview : p5.js | Diversity with Code + Art Series Build an interactive browser experiment with Webcam Poses data. Maya Man’s PoseNet Sketchbook
4 09/20 Tracking faces and hands Face detection Face recognition Face landmark detection A Socratic debate, Alyosha Efros and Phillip Isola Just a dude who hacks: face-api.js playground face-api.js — JavaScript API for Face Recognition in the Browser with tensorflow.js Introducing BodyPix: Real-time Person Segmentation in the Browser with TensorFlow.js
09/22 How Machine Learning Can Benefit Human Creators, Dr.Rebecca Fiebrink Play with Just a dude who hacks: face-api.js playground, replace images Detect faces Recognize faces Detect and track face landmark
5 09/27 Style Transfer What is Style Transfer? How does it work? What do Neural Networks see? TBD The Art Of Deception - Encountering Perception as a Creative Material, Shiry Ginosar TBD TBD
09/29 Explorations in AI for Creativity, Devi Parikh Style Transfer on one image Style Transfer on webcam images Photo Styles Transfer with Runway (Need to open Runway, and run 'FastPhotoStyle' model on localhost:8000) Train your own fast style transfer model and run the model in the browser with ml5.js Or use any of the ml5's pre-trained Style Transfer models to create a new sketch Publish your project on GitHub or your own blog, or record a video and put it on your blog. Add your project link below. Train a new Style Transfer model Run a Style Transfer model in ml5.js
6 10/04 pix2pix What is pix2pix? How does it work? Applications of pix2pix Running pix2pix with ml5: demo Read this tutorial Pix2Pix with Tensorflow, TF team Nostalgia -> Art -> Creativity -> Evolution as Data + Direction, Alyosha Efros TBD TBD
10/06 AI+Creativity, an Art Nerd's Perspective, Jason Bailey Try training Pix2Pix with CMP Facade Database in Google Colab. You can find other datasets here Running pix2pix with ml5.js Setup Spell.run training environment Prepare dataset for pix2pix Training a new pix2pix model
7 10/11 DIY Neural Networks Neural Abstractions, Tom White
10/13
8 10/18,10/20
9 10/25, 10/27 ML on Google Colab, CycleGAN Efficient GANs, Jun-Yan Zhu
Magenta: Empowering creative agency with machine learning, Jesse Engel
10 11/01, 11/03 Embedding bias (Google) / Adversarial attacks Artificial Biodiversity, Sofia Crepso & Feileacan McCormick
"Creative-Networks", Joel Simon
11 11/08, 11/10 BigGAN, latent spaces, GLOW, RNNs Sequence Modeling: Recurrent and Recursive Nets presented, Ian Goodfellow
12 11/15, 11/17 First Order Motion Model
13 11/22, 11/24 Projects
14 11/29, 12/01 Projects
15 12/06, 12/08 Projects
16

1. TEACHABLE MACHINE + P5.JS

We begin by using the GPU in the browser.
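A minimal sketch of this idea, assuming the p5.js and ml5.js script tags are already loaded in your index.html: the pre-trained MobileNet model classifies your webcam feed, with TensorFlow.js running the model on the GPU in the browser.

```javascript
// Classify the webcam feed with MobileNet via ml5.js.
let video;
let classifier;
let label = "loading model...";

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // ml5.imageClassifier loads MobileNet; TensorFlow.js runs it on the GPU.
  classifier = ml5.imageClassifier("MobileNet", video, classifyFrame);
}

function classifyFrame() {
  classifier.classify((err, results) => {
    if (!err) label = topLabel(results);
    classifyFrame(); // keep classifying, frame after frame
  });
}

// Pure helper: pick the most confident label from ml5's results array.
function topLabel(results) {
  return results && results.length ? results[0].label : "unknown";
}

function draw() {
  image(video, 0, 0);
  fill(255);
  textSize(24);
  text(label, 10, height - 10);
}
```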

Homework

WATCH

Intro to ml5, Dan Shiffman

Image classifier with ml5 and MobileNet, Dan Shiffman

ml4w-homework: How to push code to a Github Repo and host sketch on Github

Video: How to host p5 sketch on GitHub pages

CODE

Anna Ridler, Investigations into the pathways between words, definitions and data

Build on top of the image classifier example(demo) from the coding session. Publish it on your blog / GitHub. Add your homework link to the list below.

Make something new using one example from the collected ml5js examples

Shiffman's ml5.js videos

Hilary Mason explains machine learning to 5 different people

A Beginner's Guide to Machine Learning with ml5.js

  1. ml5.js: Image Classification with MobileNet
  2. ml5.js: Webcam Image Classification
  3. ml5.js: Object Detection with COCO-SSD
  4. ml5.js: Transfer Learning with Feature Extractor
  5. ml5.js: Feature Extractor Classification
  6. ml5.js: Feature Extractor Regression
  7. ml5.js: Save/Load Model

  • Session 1 (M 08/30): Introduction to Machine Learning

  • Session 2 (W 09/01): Coding session:

    • Installing ml5.js
    • Running the Image Classification example with ml5.js
    • Hosting a p5 sketch on GitHub, or using the p5 web editor
    • How to update the homework wiki
  • Video:

  • Coding:

    • Build on top of the image classifier example(demo) from the coding session. Publish it on your blog / GitHub. Add your homework link to the list below.
    • Or try any of the ml5js examples and make something based on one of them.

Watch Dan Shiffman's videos before class on Wednesday.

  1. Teachable Machine 1: Image Classification
  2. Teachable Machine 2: Snake Game

We will start by using Teachable Machine, "a web-based tool that makes creating machine learning models fast, easy, and accessible to everyone." This experiment from Google brings a no-code and low-code approach to training AI models.

This teaching tool from Google demonstrates a machine learning workflow that you will use throughout the course:

  1. Gather images, sound, etc.
  2. Train the model
  3. Preview and Save/Export

It may be designed for High School STEM programs, but that doesn't mean that you can't make something interesting with Teachable Machine. Let's try it!
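Once you have trained and exported a model, you can load it into a p5.js sketch with ml5. The sketch below is one way to do that; the model URL is a placeholder (replace it with the shareable link Teachable Machine gives you on export), and the 0.75 confidence threshold is an arbitrary choice.

```javascript
// Load an image model exported from Teachable Machine into ml5.js.
// MODEL_URL is a placeholder; paste your own exported model link here.
const MODEL_URL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/";

let video;
let classifier;
let label = "waiting...";

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // ml5 loads Teachable Machine image models from their model.json file.
  classifier = ml5.imageClassifier(MODEL_URL + "model.json", video, classifyFrame);
}

function classifyFrame() {
  classifier.classify((err, results) => {
    if (!err && isConfident(results, 0.75)) label = results[0].label;
    classifyFrame();
  });
}

// Pure helper: only accept the top prediction above a confidence threshold.
function isConfident(results, threshold) {
  return !!(results && results.length > 0 && results[0].confidence >= threshold);
}

function draw() {
  image(video, 0, 0);
  fill(0, 255, 0);
  textSize(24);
  text(label, 10, 30);
}
```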

Exercises

Exercise 1. Train teachable machine to learn the difference between red and green. Export your model for later.

Exercise 2. Train Teachable Machine to recognize voices. Make a different class for each person so we can use the model to identify who is speaking. Save/Export your model to use in p5.js.

Exercise 3. Let's do something interesting with your voice classifier. Watch this video showing a collective game of PONG. Could you play PONG with your classifiers? Amy Goodchild reproduced the experiment in 2018.

(Note that Carpenter returned to SIGGRAPH with an airplane simulation. Could you fly an airplane collectively?)

Dan Shiffman's video on using Teachable Machine will help you with the next step (whatever that might be!). Let's try making a game!
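Here is one possible sketch for Exercise 3, wiring a Teachable Machine audio model to a PONG-style paddle. The model URL and the class names "up" and "down" are assumptions; substitute whatever labels you trained in Exercise 2.

```javascript
// Drive a paddle with a Teachable Machine sound model via ml5.soundClassifier.
// SOUND_MODEL is a placeholder; paste your own exported model link here.
const SOUND_MODEL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/";

let paddleY = 200;
let label = "listening...";

function setup() {
  createCanvas(400, 400);
  const classifier = ml5.soundClassifier(SOUND_MODEL + "model.json", () => {
    // soundClassifier streams results continuously once classify() is called.
    classifier.classify((err, results) => {
      if (!err) label = results[0].label;
    });
  });
}

// Pure helper: map a class label to a paddle movement in pixels per frame.
// "up" and "down" are assumed class names from your trained model.
function paddleStep(lbl) {
  if (lbl === "up") return -5;
  if (lbl === "down") return 5;
  return 0; // background noise: stay put
}

function draw() {
  background(0);
  paddleY = constrain(paddleY + paddleStep(label), 0, height - 80);
  fill(255);
  rect(20, paddleY, 10, 80);
  text(label, 40, 20);
}
```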

ML in the browser with ml5.js

Leverage your knowledge (and models) to make new projects and games using a browser-based machine learning library called ml5.js. ml5.js provides access to machine learning algorithms and models in the browser, building on top of TensorFlow.js with no other external dependencies.

The library is supported by code examples, tutorials, and sample datasets with an emphasis on ethical computing. Bias in data, stereotypical harms, and responsible crowdsourcing are part of the documentation around data collection and usage. As the ml5 team puts it: "We're building friendly machine learning for the web - we're glad you're here!"

ml5.js is heavily inspired by Processing and p5.js.

2. Image Classification

  • Session 1 (M 09/06): NO CLASS: Labor Day

  • Session 2 (W 09/08): Coding session:

    • Make a KNN Image Classifier
  • Reading:

  • Video:

    • Coding train: ml5.js: KNN Classification parts 1-3
  • Coding:

    • Add new outputs to the KNN Image Classifier example, mix it with videos, games, or physical computing. Publish your project on GitHub or your own blog, or record a video and put it on your blog. Add your project link below.
  • Object Detection Example Andreas Refsgaard + Lasse Korsgaard

Homework

  • Reading:

How to build a Teachable Machine with TensorFlow.js, Nikhil Thorat

Coding train: ml5.js: KNN Classification parts 1- 3, Dan Shiffman

  • Coding:

Add new outputs to the KNN Image Classifier example, mix it with videos, games, or physical computing. Publish your project on GitHub or your own blog, or record a video and put it on your blog. Add your project link below.
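A sketch of the KNN workflow from the Coding Train videos, as a starting point: MobileNet's feature extractor turns webcam frames into feature vectors, and the KNN classifier learns from examples you add live. The "a"/"b"/"c" key bindings are an arbitrary choice for this example.

```javascript
// KNN classification on MobileNet features with ml5.js.
// Press "a" or "b" to add an example for that class, "c" to classify.
let video;
let features;
let knn;
let label = "no examples yet";

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  features = ml5.featureExtractor("MobileNet", () => console.log("model ready"));
  knn = ml5.KNNClassifier();
}

function keyPressed() {
  if (key === "a" || key === "b") {
    // infer() turns the current frame into a feature vector ("logits").
    knn.addExample(features.infer(video), key);
  } else if (key === "c") {
    knn.classify(features.infer(video), (err, result) => {
      if (!err) label = bestLabel(result.confidencesByLabel);
    });
  }
}

// Pure helper: pick the label with the highest confidence from a
// { label: confidence } object.
function bestLabel(confidences) {
  let best = "unknown";
  let max = -Infinity;
  for (const [lbl, conf] of Object.entries(confidences || {})) {
    if (conf > max) { max = conf; best = lbl; }
  }
  return best;
}

function draw() {
  image(video, 0, 0);
  fill(255, 0, 0);
  textSize(24);
  text(label, 10, 30);
}
```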

3. Tracking the body

Artist projects

Homework

Reading:

Real-time Human Pose Estimation in the Browser with TensorFlow.js, Dan Oved, freelance creative technologist at Google Creative Lab*

Introducing BodyPix: Real-time Person Segmentation in the Browser with TensorFlow.js, Dan Oved

Playlist:

Hour of Code with p5.js and PoseNet, Dan Shiffman

ml5.js Pose Estimation with PoseNet, Dan Shiffman

Coding:

Build an interactive browser experiment on Webcam Poses data

Publish your project on GitHub or your own blog, or record a video and put it on your blog. Add your project link below.
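A minimal PoseNet sketch to build on, in the spirit of Shiffman's tutorial: track a pose from the webcam and draw a circle on the nose. The keypoint names ("nose", "leftWrist", ...) come from PoseNet itself.

```javascript
// Draw a circle on the nose with ml5's PoseNet.
let video;
let pose = null;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  const poseNet = ml5.poseNet(video, () => console.log("PoseNet ready"));
  // PoseNet fires a "pose" event with an array of detected poses.
  poseNet.on("pose", (poses) => {
    if (poses.length > 0) pose = poses[0].pose;
  });
}

// Pure helper: find a named keypoint in a pose, returning its {x, y}
// position or null if it isn't there.
function keypointPosition(p, part) {
  if (!p || !p.keypoints) return null;
  const kp = p.keypoints.find((k) => k.part === part);
  return kp ? kp.position : null;
}

function draw() {
  image(video, 0, 0);
  const nose = keypointPosition(pose, "nose");
  if (nose) {
    fill(255, 0, 0);
    ellipse(nose.x, nose.y, 30, 30);
  }
}
```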

(sound classification)

Hands and faces

  • Reading:

face-api.js — JavaScript API for Face Recognition in the Browser with tensorflow.js

Introducing BodyPix: Real-time Person Segmentation in the Browser with TensorFlow.js

  • Coding:

Build an interactive browser experiment on face or hand data

Publish your project on GitHub or your own blog, or record a video and put it on your blog. Add your project link below.
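One possible starting point for the hand-data experiment: ml5's handpose model emits 21 [x, y, z] landmarks per detected hand, so you can measure the "pinch" between thumb tip (landmark 4) and index fingertip (landmark 8) and use it as an input.

```javascript
// Track a hand with ml5.handpose and measure the thumb-index pinch.
let video;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  const handpose = ml5.handpose(video, () => console.log("handpose ready"));
  // handpose emits 21 [x, y, z] landmarks per detected hand.
  handpose.on("predict", (results) => { predictions = results; });
}

// Pure helper: 2D distance between thumb tip (landmark 4) and index tip
// (landmark 8), given handpose's landmarks array.
function pinchDistance(landmarks) {
  const [tx, ty] = landmarks[4];
  const [ix, iy] = landmarks[8];
  return Math.hypot(tx - ix, ty - iy);
}

function draw() {
  image(video, 0, 0);
  if (predictions.length > 0) {
    const d = pinchDistance(predictions[0].landmarks);
    fill(0, 255, 0);
    textSize(24);
    text("pinch: " + d.toFixed(1), 10, 30);
  }
}
```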

4. Style Transfer

  • Session 1 (M 10/04): Image Data

  • Session 2 (W 10/06): Coding session:

    • Training a new Style Transfer model
    • Running Style Transfer model in ml5.js
  • Coding:

    • Training a new Style Transfer model: run this

      Google Colab. Watch this: video1 | video2

      • Notes:
        • Open the Colab and make sure the GPU is enabled: Menu - Runtime - Change runtime type
        • Run through each cell, wait for it to finish, and check that there is no error in its output
        • Steps 2 and 3 may take one to two hours to finish; keep the tab open and active while waiting (and keep your computer plugged in)
        • Once step 2 (download datasets) finishes, don't re-run it, because it takes a long time to finish
        • While running step 2 (download dataset), it may warn that "Disk is almost full"; ignore that
    • Running Style Transfer model in ml5.js, p5 sketch

  • Watch these short videos:

  • Generate videos, images, or GPT text using Runway models

  • Or Build a sketch using Runway's model in p5.js or Processing

  • Or use Photoshop or Unity Runway plugin to process images or build a game scene

  • Add your blog/project link below

Homework:

  • Coding:

Train your own fast style transfer model and run the model in the browser with ml5.js

Or use any of the ml5's pre-trained Style Transfer models to create a new sketch

Publish your project on GitHub or your own blog, or record a video and put it on your blog. Add your project link below.

See demos live

Style Transfer on one image

Style Transfer on webcam images

Photo Styles Transfer with Runway (Need to open Runway, and run 'FastPhotoStyle' model on localhost:8000)

AdainStyleTransfer and bodyPix with Runway (Need to host AdainStyleTransfer model in Runway, and change the model url and auth in the code)
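The "Style Transfer on one image" demo boils down to a few ml5 calls. The sketch below is a sketch under assumptions: "models/wave" and "img/portrait.jpg" are placeholder paths; point them at any pre-trained or self-trained ml5 style transfer model and input image.

```javascript
// Apply an ml5 Style Transfer model to a single image.
let inputImg;
let resultImg = null;

function preload() {
  inputImg = loadImage("img/portrait.jpg"); // placeholder input image
}

function setup() {
  createCanvas(500, 250);
  const style = ml5.styleTransfer("models/wave", () => {
    // transfer() hands back a result whose .src is the styled image.
    style.transfer(inputImg, (err, result) => {
      const src = resultImageSrc(err, result);
      if (src) resultImg = createImg(src).hide();
    });
  });
}

// Pure helper: safely pull the image source out of ml5's callback result.
function resultImageSrc(err, result) {
  return !err && result && result.src ? result.src : null;
}

function draw() {
  background(0);
  image(inputImg, 0, 0, 250, 250);
  if (resultImg) image(resultImg, 250, 0, 250, 250); // original | styled
}
```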

5. pix2pix

Homework:

Read this tutorial Pix2Pix with Tensorflow, TF team

Try training Pix2Pix with CMP Facade Database in Google Colab. You can find other datasets here

(runwayml)
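Running pix2pix in the browser with ml5 looks roughly like this: you draw edges on a 256x256 canvas and the model generates an image from them. The edges2pikachu model path is taken from the ml5 examples; treat it as a placeholder for whatever model you train.

```javascript
// Run an ml5 pix2pix model on a hand-drawn canvas. pix2pix expects
// a 256x256 input; press "t" to transfer.
const SIZE = 256;
let pix2pix;
let ready = false;

function setup() {
  createCanvas(SIZE, SIZE);
  background(255);
  pix2pix = ml5.pix2pix("models/edges2pikachu.pict", () => { ready = true; });
}

function draw() {
  // Draw edges with the mouse; these become the model's input.
  if (mouseIsPressed) {
    stroke(0);
    strokeWeight(3);
    line(pmouseX, pmouseY, mouseX, mouseY);
  }
}

function keyPressed() {
  if (key === "t" && ready && isValidInputSize(width, height)) {
    pix2pix.transfer(select("canvas").elt, (err, result) => {
      if (!err) createImg(result.src); // show the generated image
    });
  }
}

// Pure helper: pix2pix requires exactly 256x256 input.
function isValidInputSize(w, h) {
  return w === 256 && h === 256;
}
```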

6. DIY Neural Networks
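A "DIY" network in ml5 means defining, training, and running your own small model with ml5.neuralNetwork. The sketch below is one illustrative setup (the red-ish/blue-ish color task and the 32-epoch budget are choices for this example, not requirements): synthetic RGB examples are labeled by a simple rule, and the network learns to reproduce that rule.

```javascript
// Train a tiny classification network with ml5.neuralNetwork.
let nn;
let status = "training...";

function setup() {
  createCanvas(200, 200);
  nn = ml5.neuralNetwork({ task: "classification", debug: true });

  // Generate labeled examples from a known rule for the network to learn.
  for (let i = 0; i < 400; i++) {
    const r = Math.floor(Math.random() * 256);
    const g = Math.floor(Math.random() * 256);
    const b = Math.floor(Math.random() * 256);
    nn.addData({ r, g, b }, { label: colorLabelFor(r, g, b) });
  }
  nn.normalizeData();
  nn.train({ epochs: 32 }, () => { status = "trained - click to classify"; });
}

// Pure helper: the ground-truth rule the network should learn.
function colorLabelFor(r, g, b) {
  return r > b ? "red-ish" : "blue-ish";
}

function mousePressed() {
  const r = Math.floor(Math.random() * 256);
  const g = Math.floor(Math.random() * 256);
  const b = Math.floor(Math.random() * 256);
  background(r, g, b);
  nn.classify({ r, g, b }, (err, results) => {
    if (!err) status = results[0].label;
  });
}

function draw() {
  fill(255);
  text(status, 10, 20);
}
```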

7. Embedding bias (Google)

Watch this video from Google. Could you intentionally build bias into an ML model?

8. Adversarial attacks on ML models (B0RK)

Reproduce B0RK's experiments and turn the pandas into vultures.

9. RNNs

  • Session 1 (M 10/11): SketchRNN (Drawing)
  • Session 2 (W 10/13): charRNN (Text)
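For the SketchRNN session, the core loop looks like this: the model streams one stroke at a time as relative {dx, dy, pen} moves, and the sketch converts them to absolute coordinates and draws lines while the pen is down. The "cat" model name comes from the ml5 SketchRNN examples.

```javascript
// Let SketchRNN generate a drawing, one stroke at a time.
let model;
let x, y;
let previousPen = "down";

function setup() {
  createCanvas(400, 400);
  background(255);
  x = width / 2;
  y = height / 2;
  model = ml5.sketchRNN("cat", () => model.generate(gotStroke));
}

function gotStroke(err, s) {
  if (err || !s) return;
  const [nx, ny] = nextPoint(x, y, s);
  if (previousPen === "down") line(x, y, nx, ny); // draw while pen is down
  x = nx;
  y = ny;
  previousPen = s.pen; // "down", "up", or "end"
  if (s.pen !== "end") model.generate(gotStroke);
}

// Pure helper: strokes are relative moves; convert to absolute coordinates.
function nextPoint(px, py, strokeObj) {
  return [px + strokeObj.dx, py + strokeObj.dy];
}
```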

8: FALL BREAK

  • Session 1 (M 10/18): NO CLASS
  • Session 2 (W 10/20): NO CLASS
  • Session 1 (M 10/25): Introduction to Google Colab
  • Session 2 (W 10/27): Colab Model Workshop
  • Session 1 (M 11/01): Hosted Models and Networking
  • Session 2 (W 11/03): Generative Adversarial Networks, Interactive Image Synthesis
  • Session 1 (M 11/08): StyleGAN and Object Detection
  • Session 2 (W 11/10): GPT-2
  • Session 1 (M 11/15): Project Proposals 1
  • Session 2 (W 11/17): Project Proposals 2
  • Session 1 (M 11/22): Individual Meetings
  • Session 2 (W 11/24): Individual Meetings
  • Session 1 (M 11/29): TBD 1
  • Session 2 (W 12/01): TBD 2

14: Final Project Presentations

  • Session 1 (M 12/06): Group 1
  • Session 2 (W 12/08): Group 2

CODE OF CONDUCT

Please read and review the ITP/IMA Code of Conduct. The Code of Conduct will be reviewed and discussed as part of the course introduction.

The ITP/IMA Code of Conduct is an evolving work-in-progress document that establishes and communicates the commitment of the ITP/IMA community to uphold a key set of standards and obligations that aim to make ITP/IMA an inclusive and welcoming environment.

COURSE DESCRIPTION

An introductory course designed to provide students with hands-on experience developing creative coding projects with machine learning. The history, theory, and application of machine learning algorithms and related datasets are explored in a laboratory context of experimentation and discussion. Examples and exercises will be demonstrated in JavaScript using the p5.js, ml5.js, and TensorFlow.js libraries. In addition, students will learn to work with open source pre-trained models in the cloud using Runway. Principles of data collection and ethics are introduced. Weekly assignments, team and independent projects, and project reports are required.

COURSE OBJECTIVES

At the completion of this course, the student will:

  • Develop an intuition for and high level understanding of core machine learning concepts and algorithms, including supervised learning, unsupervised learning, reinforcement learning, transfer learning, classification, and regression.
  • Be able to apply machine learning algorithms to real-time interaction in media art projects using pre-trained models and “transfer learning” in JavaScript and related tools.
  • Learn how to collect a custom dataset to train a machine learning model and
  • Develop a vocabulary for critical discussions around the social impact and ethics of data collection and application of machine learning algorithms.
  • Become familiar with the current landscape of new media art generated from machine learning algorithms. Understand how to use a machine learning model to generate media: words, sound, and images.

EQUIPMENT

You will need a modern laptop (4 years old or younger is a good rule of thumb). Most required software is freely available. The department has all required commercial software installed on laptops available for checkout.

COURSE TEXTS

There is no textbook for the class. Readings and videos will be assigned on the individual session notes pages.

GRADING AND ATTENDANCE

Grades for the course will follow the standard A through F letter grading system and will be determined by the following breakdown:

  • 25% Participation
  • 50% Assignments (including reading responses and other written work)
  • 25% Final project

At most two (2) unexcused absences will be tolerated without effect to your grade. Any more than two (2) unexcused absences will result in a lowering of your final grade by one whole grade for each unexcused absence. For example, three (3) unexcused absences will result in your highest possible grade being a B instead of an A. Four (4) unexcused absences will result in your highest possible grade being a C, and so on. Six (6) unexcused absences will result in an automatic F for the course. Two (2) late arrivals will count as one (1) absence.

PARTICIPATION:

This class will be highly participatory. You are expected to contribute to discussions, engage in group work, give feedback to your peers, and otherwise fully participate in class.

TEACHING STYLE

COURSE SCHEDULE

The course will meet two (2) times per week for one hour and fifteen minutes (1:15) for a total of 14 weeks.

ASSIGNMENTS

There will be regular assignments relevant to the class material. These assignments must be documented (written description, photos, screenshots, screen recording, code, and video all qualify, depending on the assignment) on a web platform such as a blog or Google Doc. You are required to link to your assignment from the course repo (you may choose to use a privately shared Google Doc or password-protected website if you prefer). The due dates are specified on the assignment page.

It is expected that you will spend 6 to 8 hours a week on the class outside of class itself. This will include reviewing material, reading, watching video, completing assignments and so on. Please budget your time accordingly.

Each assignment will be marked as complete (full credit), partially complete (half credit), or incomplete (no credit). To be complete an assignment should meet the criteria specified in the syllabus including documentation. If significant portions are simply not attempted or the assignment is turned in late (up to 1 week) then it may be marked partially complete. If it is more than a week late, not turned in, or an attempt isn’t made to meet the criteria specified it will be marked incomplete.

Responses to reading and other written assignments are also due in class one week after they are assigned and must also be submitted via the class website. Written assignments are expected to be 200 to 500 words in length unless otherwise specified. Grading will follow the same guidelines as above; on time and meeting the criteria specified will be marked as complete. Late (up to 1 week) or partially completed work will be given half credit. Work that is more than a week late, not turned in, or fails to meet the criteria specified will be given no credit.

Readings

LEARNING OBJECTIVES

  • Develop an intuition for and high level understanding of core machine learning concepts and algorithms, including supervised learning, unsupervised learning, reinforcement learning, transfer learning, classification, and regression.
  • Be able to apply machine learning algorithms to real-time interaction in media art projects using pre-trained models and “transfer learning” in JavaScript and related tools.
  • Learn how to collect a custom dataset to train a machine learning model and
  • Understand how to use a machine learning model to generate media: words, sound, and images.

VIDEO PLAYLIST

Beginner's Guide to Machine Learning in JavaScript with ml5.js Dan Shiffman's playlist provides an introduction to developing creative coding projects with machine learning. The theory and application of machine learning algorithms are demonstrated in JavaScript using the p5.js and ml5.js libraries.

Grant Sanderson source code

Dan Shiffman, Coding Train, NYU

  1. ml5.js: KNN Classification Part 1

  2. ml5.js: KNN Classification Part 2

  3. ml5.js: KNN Classification Part 3

  4. ml5.js: Train Your Own Neural Network

  5. ml5.js: Save Neural Network Training Data

  6. ml5: Save Neural Network Trained Model

  7. ml5: Neural Network Regression

  8. ml5.js: Sound Classification

  9. Teachable Machine 3: Sound Classification

  10. Coding Challenge #151: Ukulele Tuner with Machine Learning Pitch Detection Model

  11. ml5.js Pose Estimation with PoseNet

  12. ml5.js: Pose Classification with PoseNet and ml5.neuralNetwork()

  13. ml5.js: Pose Regression with PoseNet and ml5.neuralNetwork()

  14. ml5.js: Train a Neural Network with Pixels as Input

  15. ml5.js: What is a Convolutional Neural Network Part 1 - Filters

  16. ml5.js: What is a Convolutional Neural Network Part 2 - Max Pooling

  17. ml5.js: Training a Convolutional Neural Network for Image Classification

  18. Coding Challenge #158: Shape Classifier Neural Network with ml5.js

  19. ml5.js: Classifying Drawings with DoodleNet

READINGS

PRESENTATIONS

PROJECTS USING ML

  • Penny - Stamen: https://stamen.com/work/penny/
  • Terrapattern - Levin et al.: http://www.terrapattern.com/
  • OpenDataCam - Move Lab: https://www.move-lab.com/project/opendatacam/
  • Aerial Bold - Groß & Lee: http://type.aerial-bold.com/tw/ and https://www.kickstarter.com/projects/357538735/aerial-bold-kickstart-the-planetary-search-for-let
  • pplkpr - McCarthy & MacDonald: http://lauren-mccarthy.com/pplkpr
  • us+ - McCarthy & MacDonald: http://lauren-mccarthy.com/us
  • Jenna Xu: https://www.xujenna.com/ITP/allegoryOf/index.php and https://www.xujenna.com/ITP/NYQuotient/index.php
  • Philipp Schmitt: https://philippschmitt.com/
  • vectoglyph - Nicolas Boillot: https://www.fluate.net/en/travaux/vectoglyph
  • Kim Albrecht: https://kimalbrecht.com/vis_geisterstunde/#artificial-senses
  • Maya Man: https://mayaontheinter.net/billtjones/
  • People + AI Guidebook - Google PAIR: https://pair.withgoogle.com/
  • What If Tool - Google PAIR: https://pair-code.github.io/what-if-tool/
  • Sidewalk orchestra - Cris Valenzuela: https://twitter.com/c_valenzuelab/status/979131716907536384
  • Yossarian - J. Paul Neeley: https://yossarian.co/
  • Cognimates - Stefania Druga + MIT: http://cognimates.me/home/
  • A People's Guide to AI - Mimi Onuoha: http://mimionuoha.com/a-peoples-guide-to-ai