2023-Big-Data-Driven-Artificial-Intelligence

This repository contains the source code and materials for the Big Data Driven Artificial Intelligence course at Beijing Normal University (BNU), Spring 2023.

This course comprehensively introduces the latest developments in Big Data Driven Artificial Intelligence, including but not limited to neural networks, deep learning, reinforcement learning, causal inference, generative models, language models, and AI for scientific discovery.

Outline

Lecture-01: Introduction to Big Data and Artificial Intelligence;

  • Providing an overview of the history and different schools of Artificial Intelligence.
  • Covering the latest advancements in big data driven AI technologies.
  • Illustrating real-world applications such as ChatGPT and protein folding prediction.
  • References
     - Machine intelligence, Nature 521, 435 (28 May 2015).  |  Paper  |
     - Prediction and its limits, Science, Vol 355, Issue 6324, pp. 468-469 (3 Feb 2017).  |  Paper  |
     - AI transforms science, Science, Vol 357, Issue 6346 (7 Jul 2017).  |  Paper  |
     - The Emperor's New Mind (《皇帝的新脑》), Roger Penrose;
     - On Intelligence (《人工智能的未来》), Jeff Hawkins;
     - The Book of Why: The New Science of Cause and Effect (《为什么:关于因果关系的新科学》), Judea Pearl / Dana Mackenzie;

Lecture-02: Automatic Differentiation and PyTorch Programming;

  • Introducing automatic differentiation technique and its application scenarios.
  • Introducing the PyTorch automatic differentiation programming platform.
  • Providing an example of using PyTorch.
  • References
     - Automatic Differentiation in Machine Learning: a Survey.  |  Paper |  Code  |
     - Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs.  |  Paper  |  Code  |
     - Categorical Reparameterization with Gumbel-Softmax.  |  Paper  |  Code  |
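As an illustrative sketch (not from the course notebooks): the core idea of automatic differentiation can be shown in a few lines with forward-mode dual numbers. PyTorch's autograd uses reverse mode instead, but the chain-rule bookkeeping is the same; the `Dual` class and `derivative` helper below are illustrative names.

```python
# Forward-mode automatic differentiation with dual numbers: a minimal,
# framework-free sketch of the technique that autograd systems generalize.

class Dual:
    """A number a + b*eps with eps**2 == 0; `dot` carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x) and df/dx in a single forward pass."""
    out = f(Dual(x, 1.0))
    return out.val, out.dot

# d/dx (x^2 + 3x) at x = 2: value 10, derivative 2x + 3 = 7
val, grad = derivative(lambda x: x * x + 3 * x, 2.0)
```

Seeding the input with `dot = 1.0` propagates the derivative through every arithmetic operation automatically, which is exactly what frameworks do at scale.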

Lecture-03: Fundamentals of Machine Learning;

  • What is machine learning, and how is it commonly categorized?
  • What are the basic steps of machine learning?
  • Performance evaluation and common issues in machine learning.
  • Introduction to simple feedforward neural networks and backpropagation algorithm.
  • References
     - A high-bias, low-variance introduction to Machine Learning for physicists.  |  Paper |  Code1  |  Code2  |
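A minimal end-to-end sketch of the basic steps (data, forward pass, loss, backpropagation, parameter update), written by hand in numpy rather than taken from the course code; the toy task of fitting sin(x) with a 1-16-1 tanh network is an assumption for illustration.

```python
# A two-layer network trained with hand-coded backpropagation (numpy only),
# making the forward-pass / loss / gradient / update cycle explicit.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x)
X = rng.uniform(-3, 3, size=(64, 1))
y = np.sin(X)

# Parameters of a 1-16-1 network
W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

lr, losses = 0.05, []
for _ in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    losses.append(np.mean((pred - y) ** 2))

    # Backward pass: the chain rule, written out layer by layer
    g_pred = 2 * (pred - y) / len(X)          # dL/dpred
    g_W2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)                  # tanh derivative
    g_W1 = X.T @ g_z; g_b1 = g_z.sum(0)

    # Gradient-descent update
    W1 -= lr * g_W1; b1 -= lr * g_b1
    W2 -= lr * g_W2; b2 -= lr * g_b2
```

The same loop, with autograd replacing the hand-written backward pass, is what PyTorch training code automates.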

Lecture-04: Common Neural Network Architectures;

  • Basic and commonly used neural network architectures, such as feedforward, convolutional, and recurrent neural networks, with programming practice.
  • Classification problems and practices in image processing and natural language processing.
  • Fundamental methods of data processing.
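To make the convolutional case concrete, here is a framework-free sketch (an illustration, not course code) of the sliding-window operation at the heart of CNNs; note that deep-learning "convolution" layers actually compute cross-correlation, as below.

```python
# The core operation behind convolutional networks: sliding a small
# kernel over an image. A minimal "valid" 2-D cross-correlation in numpy.
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation (what deep-learning conv layers compute)."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical edge detector on a toy image: left half 0, right half 1
img = np.zeros((5, 6)); img[:, 3:] = 1.0
edge = np.array([[-1.0, 1.0]])
resp = conv2d(img, edge)   # responds only at the 0 -> 1 boundary
```

In a real CNN the kernel weights are learned rather than hand-set, and many kernels run in parallel over multiple channels.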

Lecture-05: Theory of Representation Learning;

  • Representation learning theory.
  • Representation learning and transfer learning.
  • Pre-training and transfer learning.
  • Examples of transfer learning in image tasks.
  • Introduction to word embedding techniques and their applications.
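The geometric intuition behind word embeddings can be sketched with hand-made toy vectors (the 3-d vectors below are invented for illustration; real embeddings such as word2vec or GloVe are learned from large corpora):

```python
# Word embeddings map tokens to dense vectors whose geometry encodes
# similarity; vector arithmetic then captures analogies.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.2, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.2, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The classic analogy: king - man + woman should land near queen
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
```

Transfer learning with embeddings works the same way: the pre-trained vectors are reused as input features for a new downstream task.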

Lecture-06: From Deep Neural Networks to Neural ODE;

  • Numerical algorithms for solving ordinary differential equations.
  • Residual networks.
  • Principles of Neural ODE.
  • Application examples.
  • Optimal control and adjoint algorithm.
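The link between residual networks and ODE solvers can be shown in a few lines (an illustrative sketch with an assumed toy vector field, not course code): a residual block computes h + f(h), which is exactly one Euler step of dh/dt = f(h).

```python
# Euler integration of dh/dt = -h, whose exact solution is h0 * exp(-t).
# Each Euler step has the form of a residual block: h <- h + dt * f(h).
import math

def f(h):
    return -h   # a simple vector field standing in for a neural network

def euler(h0, steps, T=1.0):
    h, dt = h0, T / steps
    for _ in range(steps):
        h = h + dt * f(h)   # one "residual block" per step
    return h

coarse = euler(1.0, 4)      # a 4-block residual network
fine = euler(1.0, 1000)     # near the continuous-time limit exp(-1)
```

A Neural ODE takes the limit of infinitely many, infinitesimally small residual steps and hands the integration to an adaptive solver, with gradients computed by the adjoint method.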

Lecture-07: Overview of Generative Models;

  • The difference between generative models and predictive models.
  • Classification of generative models.
  • Introduction to generative models, including GANs, VAEs, normalizing flows, and diffusion models.
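The defining property of a generative model, learning a distribution p(x) and sampling new data from it, can be illustrated with the simplest possible case (a deliberately minimal sketch; the Gaussian fit below stands in for what GANs, VAEs, flows, and diffusion models do with neural networks):

```python
# The smallest generative model: fit a distribution to data, then sample.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=10_000)   # the "training set"

# "Training": maximum-likelihood estimates of a Gaussian's parameters
mu, sigma = data.mean(), data.std()

# "Generation": draw brand-new samples from the learned distribution
samples = rng.normal(mu, sigma, size=10_000)
```

A predictive model would instead learn p(y|x) for a fixed target y; the generative model has no target and can produce unlimited new data resembling the training set.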

Lecture-08: Data-Driven Modeling of Complex Systems;

  • Introduction to complex systems.
  • Modeling methods for complex systems.
  • Data-driven modeling methods for complex systems.
  • A complete closed-loop system, including decision-making and feedback.
  • Learning causal relationships.
  • Reinforcement learning framework based on world models.
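Data-driven modeling of a dynamical system can be sketched in miniature (an illustrative example, not course code; the logistic map and the two-term candidate library are assumptions chosen for simplicity): observe a trajectory, then recover the governing rule by regression.

```python
# Recovering the rule of a dynamical system from observed data alone:
# least squares identifies the coefficients of the logistic map
# x_{t+1} = r * x_t * (1 - x_t) from a single trajectory.
import numpy as np

r = 3.7
x = np.empty(500); x[0] = 0.2
for t in range(499):
    x[t + 1] = r * x[t] * (1 - x[t])     # ground-truth dynamics ("the data")

# Candidate library of terms [x, x^2]; fit x_{t+1} = a*x + b*x^2
features = np.column_stack([x[:-1], x[:-1] ** 2])
coef, *_ = np.linalg.lstsq(features, x[1:], rcond=None)
a, b = coef   # recovers a = r and b = -r
```

Richer versions of this idea, with larger term libraries, sparsity, or neural networks in place of the linear fit, are what data-driven modeling of complex systems builds on.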

Lecture-09: Graph Neural Networks;

  • Graphs and networks.
  • Basic principles of Graph Neural Networks.
  • Basic applications of Graph Neural Networks.
  • Node classification.
  • Data-driven modeling of complex systems based on Graph Neural Networks.
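The basic step of a graph neural network, message passing over the adjacency structure, can be sketched without learned weights (an illustrative numpy example on an assumed 4-node path graph, not course code):

```python
# One round of message passing: each node averages the features of its
# neighbors (plus itself), the propagation step underlying GCN-style GNNs.
import numpy as np

# A 4-node path graph: 0 - 1 - 2 - 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = np.array([[1.0], [0.0], [0.0], [0.0]])  # a feature "signal" on node 0

# Normalized propagation: A_hat = D^-1 (A + I), then H' = A_hat @ H
A_self = A + np.eye(4)
A_hat = A_self / A_self.sum(axis=1, keepdims=True)

H = X
for _ in range(2):
    H = A_hat @ H   # information travels one hop per round
```

After two rounds the signal has reached node 2 (two hops from node 0) but not node 3 (three hops away); a full GNN interleaves this propagation with learned weight matrices and nonlinearities.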

Lecture-10: From Transformer to ChatGPT;

  • Attention mechanism.
  • Self-attention mechanism and network structure learning.
  • Introduction to Transformer architecture.
  • Applications of Transformer.
  • Self-supervised learning mechanism based on language models.
  • Introduction to architectures such as BERT, GPT-3, and ChatGPT.
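The attention mechanism at the core of the Transformer can be written in a few lines of numpy (a single-head illustrative sketch; real implementations add learned projections, masking, and multiple heads):

```python
# Scaled dot-product attention: Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V, weights

# 3 tokens of dimension 4; every token attends over all tokens
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = attention(Q, K, V)
```

In self-attention, Q, K, and V are all linear projections of the same token sequence, so the weight matrix `w` can be read as a learned, input-dependent graph over the tokens.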

Lecture-11: Causal Machine Learning;

  • Causation and Correlation.
  • Introduction to Causal Inference.
  • Introduction to Causal Discovery.
  • Causal Representation Learning.
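The gap between correlation and causation can be demonstrated with a three-variable simulation (an illustrative sketch with an assumed linear confounding model, not course code): a confounder Z drives both X and Y, so they correlate even though neither causes the other.

```python
# Correlation without causation, and what an intervention do(X) reveals.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational data: Z -> X and Z -> Y, but no edge X -> Y
Z = rng.normal(size=n)
X = Z + 0.3 * rng.normal(size=n)
Y = Z + 0.3 * rng.normal(size=n)
obs_corr = np.corrcoef(X, Y)[0, 1]        # strongly positive

# Intervention do(X): X is set externally, cutting the Z -> X edge
X_do = rng.normal(size=n)                 # X no longer depends on Z
Y_do = Z + 0.3 * rng.normal(size=n)
int_corr = np.corrcoef(X_do, Y_do)[0, 1]  # vanishes under intervention
```

The observational correlation is an artifact of the shared cause Z; the interventional experiment, the defining tool of causal inference, exposes that X has no effect on Y.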

Lecture-12: Reinforcement Learning;

  • Basic framework of reinforcement learning.
  • Classification of reinforcement learning.
  • Q-learning algorithm.
  • Deep reinforcement learning.
  • Reinforcement learning algorithms based on world models.
  • Causality and reinforcement learning.
  • Reinforcement learning and control/decision-making.
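The Q-learning update can be shown end to end on a tiny environment (an illustrative sketch; the 5-state chain with a reward at the right end is an assumption chosen so the optimal policy is obvious):

```python
# Tabular Q-learning on a deterministic chain: states 0..4, actions
# left/right, reward 1 only on reaching state 4. Core update:
# Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
import numpy as np

n_states, gamma, alpha, eps = 5, 0.9, 0.5, 0.2
Q = np.zeros((n_states, 2))        # actions: 0 = left, 1 = right
rng = np.random.default_rng(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

for _ in range(500):               # episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Temporal-difference update; no bootstrap from terminal states
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)          # greedy policy: move right everywhere
```

Deep reinforcement learning replaces the table `Q` with a neural network, but the temporal-difference target in the update line is unchanged.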
