Northeastern University
Applied Natural Language Processing in Engineering Part 2


Instructor: Ramin Mohammadi

Included with Coursera Plus

Gain insight into a topic and learn the fundamentals.
3 weeks to complete
At 10 hours a week
Flexible schedule
Learn at your own pace

Skills you'll gain

  • Artificial Neural Networks
  • Natural Language Processing
  • PyTorch (Machine Learning Library)
  • Statistical Machine Learning
  • Applied Machine Learning
  • Algorithms
  • Deep Learning
  • Large Language Modeling
  • Machine Learning Methods

Details to know

Shareable certificate

Add to your LinkedIn profile

Recently updated!

October 2025

Assessments

21 assignments

Taught in English

See how employees at leading companies are mastering in-demand skills

Logos of Petrobras, TATA, Danone, Capgemini, P&G, and L'Oreal

There are 7 modules in this course

This module delves into the critical preprocessing step of tokenization in NLP, where text is segmented into smaller units called tokens. You will explore various tokenization techniques, including character-based, word-level, Byte Pair Encoding (BPE), WordPiece, and Unigram tokenization. Then you’ll examine the importance of normalization and pre-tokenization processes to ensure text uniformity and improve tokenization accuracy. Through practical examples and hands-on exercises, you will learn to handle out-of-vocabulary (OOV) issues, manage large vocabularies efficiently, and understand the computational complexities involved. By the end of the module, you will be equipped with the knowledge to implement and optimize tokenization methods for diverse NLP applications.
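
As a rough illustration of the BPE idea described above, the sketch below runs a few merge steps on a toy corpus. The corpus, symbols, and number of merges are invented for illustration and are not course material.

```python
from collections import Counter

# A toy Byte Pair Encoding (BPE) sketch: repeatedly merge the most frequent
# adjacent symbol pair. The corpus below is invented for illustration.
corpus = {
    ("l", "o", "w", "</w>"): 5,
    ("l", "o", "w", "e", "r", "</w>"): 2,
    ("n", "e", "w", "e", "s", "t", "</w>"): 6,
    ("w", "i", "d", "e", "s", "t", "</w>"): 3,
}

def most_frequent_pair(vocab):
    # Count how often each adjacent symbol pair occurs, weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(pair, vocab):
    # Rewrite every word, replacing occurrences of `pair` with one merged symbol.
    merged = {}
    for word, freq in vocab.items():
        new_word, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                new_word.append(word[i] + word[i + 1])
                i += 2
            else:
                new_word.append(word[i])
                i += 1
        merged[tuple(new_word)] = freq
    return merged

for step in range(3):  # three merges, just for illustration
    best = most_frequent_pair(corpus)
    corpus = merge_pair(best, corpus)
    print(f"merge {step + 1}: {best}")
```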

Included

1 video, 13 readings, 2 assignments, 1 app item

In this module, we will explore foundational models in natural language processing (NLP), focusing on language models, feedforward neural networks (FFNNs), and Hidden Markov Models (HMMs). Language models are crucial in predicting and generating sequences of text by assigning probabilities to words or phrases within a sentence, allowing for applications such as autocomplete and text generation. FFNNs, though limited to fixed-size contexts, are foundational neural architectures used in language modeling, learning complex word relationships through non-linear transformations. In contrast, HMMs model sequences based on hidden states, which influence observable outcomes. They are particularly useful in tasks like part-of-speech tagging and speech recognition. As the module progresses, we will also examine modern advancements like neural transition-based parsing and the evolution of language models into sophisticated architectures such as transformers and large-scale pre-trained models like BERT and GPT. This module provides a comprehensive view of how language modeling has developed from statistical methods to cutting-edge neural architectures.
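
As a minimal illustration of how a statistical language model assigns probabilities to word sequences, the sketch below builds a count-based bigram model with add-one smoothing on a toy corpus. The sentences and the smoothing choice are illustrative, not taken from the course.

```python
import math
from collections import Counter

# A toy count-based bigram language model with add-one (Laplace) smoothing.
sentences = [
    ["<s>", "the", "cat", "sat", "</s>"],
    ["<s>", "the", "dog", "sat", "</s>"],
    ["<s>", "the", "cat", "ran", "</s>"],
]

unigrams, bigrams = Counter(), Counter()
for sent in sentences:
    unigrams.update(sent)
    bigrams.update(zip(sent, sent[1:]))

vocab_size = len(unigrams)

def bigram_prob(prev, word):
    # P(word | prev) with add-one smoothing so unseen pairs get nonzero mass.
    return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

def sentence_log_prob(sent):
    # Log-probability of a sentence as a sum of log bigram probabilities.
    return sum(math.log(bigram_prob(p, w)) for p, w in zip(sent, sent[1:]))

print(sentence_log_prob(["<s>", "the", "cat", "sat", "</s>"]))
```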

Included

2 videos, 19 readings, 4 assignments

In this module, we will explore Recurrent Neural Networks (RNNs), a fundamental architecture in deep learning designed for sequential data. RNNs are particularly well-suited for tasks where the order of inputs matters, such as time series prediction, language modeling, and speech recognition. Unlike traditional neural networks, RNNs have connections that allow them to “remember” information from previous steps by sharing parameters across time steps. This ability enables them to capture temporal dependencies in data, making them powerful for sequence-based tasks. However, RNNs come with challenges like vanishing and exploding gradients, which affect their ability to learn long-term dependencies. Throughout the module, you will explore different RNN variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRUs), which address these challenges. You will also delve into advanced training techniques and applications of RNNs in real-world NLP and time series problems.
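
A minimal PyTorch sketch of the idea, assuming a toy vocabulary and dimensions: an embedding layer feeds an LSTM whose hidden state at each time step is projected to next-token logits. The class and variable names are invented for illustration.

```python
import torch
import torch.nn as nn

# A tiny LSTM language model: the hidden state carries information across
# time steps, and each position predicts the next token. Sizes are illustrative.
class TinyLSTMModel(nn.Module):
    def __init__(self, vocab_size=100, embed_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        states, _ = self.lstm(x)         # hidden state at every time step
        return self.out(states)          # next-token logits per position

model = TinyLSTMModel()
batch = torch.randint(0, 100, (2, 7))    # two toy sequences of length 7
print(model(batch).shape)                # torch.Size([2, 7, 100])
```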

Included

2 videos, 22 readings, 2 assignments, 1 app item

This module introduces students to advanced Natural Language Processing (NLP) techniques, focusing on foundational tasks such as Part-of-Speech (PoS) tagging, sentiment analysis, and sequence modeling with recurrent neural networks (RNNs). Students will examine how PoS tagging helps in understanding grammatical structures, enabling applications such as machine translation and named entity recognition (NER). The module delves into sentiment analysis, highlighting various approaches from traditional machine learning models (e.g., Naive Bayes) to advanced deep learning techniques (e.g., bidirectional RNNs and transformers). Students will learn to implement both forward and backward contextual understanding using bidirectional RNNs, which improves accuracy in tasks where sequence order impacts meaning. By the end of the module, students will gain hands-on experience building NLP models for real-world applications, equipping them to handle sequential data and capture complex dependencies in text analysis.
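
To make the bidirectional idea concrete, here is a minimal PyTorch sketch of a BiLSTM sentiment classifier. The vocabulary size, dimensions, mean-pooling step, and two-class output are illustrative choices, not the course's reference implementation.

```python
import torch
import torch.nn as nn

# A minimal bidirectional LSTM sentiment classifier: forward and backward
# passes over the sequence are concatenated, mean-pooled, and classified.
class BiLSTMSentiment(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)         # (batch, seq_len, embed_dim)
        outputs, _ = self.bilstm(x)       # forward + backward states, concatenated
        pooled = outputs.mean(dim=1)      # average over time steps
        return self.classifier(pooled)    # class logits

model = BiLSTMSentiment()
print(model(torch.randint(0, 5000, (4, 20))).shape)   # torch.Size([4, 2])
```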

Included

1 video, 15 readings, 4 assignments

This module introduces you to core tasks and advanced techniques in Natural Language Processing (NLP), with a focus on structured prediction, machine translation, and sequence labeling. You will explore foundational topics such as Named Entity Recognition (NER), Part-of-Speech (PoS) tagging, and sentiment analysis, and use neural network architectures like Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Conditional Random Fields (CRFs). The module will cover key concepts in sequence modeling, such as bidirectional and multi-layer RNNs, which capture both past and future context to enhance the accuracy of tasks like NER and PoS tagging. Additionally, you will delve into Neural Machine Translation (NMT), examining encoder-decoder models with attention mechanisms to address challenges in translating long sequences. Practical implementations will involve integrating these models into real-world applications, focusing on handling complex language structures, rare words, and sequential dependencies. By the end of this module, you will be proficient in building and optimizing deep learning models for a variety of NLP tasks.
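
As a rough sketch of sequence labeling with bidirectional context, the snippet below scores one tag per token with a BiLSTM; greedy argmax decoding stands in for the CRF layer discussed in the module. All names and sizes are illustrative.

```python
import torch
import torch.nn as nn

# A minimal BiLSTM sequence labeler (e.g. for NER or PoS tagging): every token
# gets a score vector over the tag set.
class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, hidden_dim=128, num_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.tag_scores = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)         # (batch, seq_len, embed_dim)
        states, _ = self.encoder(x)       # each position sees past and future context
        return self.tag_scores(states)    # (batch, seq_len, num_tags)

tagger = BiLSTMTagger()
emissions = tagger(torch.randint(0, 5000, (2, 12)))
predicted_tags = emissions.argmax(dim=-1)   # greedy decoding over tags
print(predicted_tags.shape)                 # torch.Size([2, 12])
```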

Included

3 videos, 18 readings, 4 assignments

In this module we’ll focus on attention mechanisms and explore the evolution and significance of attention in neural networks, starting with its introduction in neural machine translation. We’ll cover the challenges of traditional sequence-to-sequence models and how attention mechanisms, particularly in Transformer architectures, address issues like long-range dependencies and parallelization, which enhances the model's ability to focus on relevant parts of the input sequence dynamically. Then, we’ll turn our attention to Transformers and delve into the revolutionary architecture introduced by Vaswani et al. in 2017, which has significantly advanced natural language processing. We’ll cover the core components of Transformers, including self-attention, multi-head attention, and positional encoding to explain how these innovations address the limitations of traditional sequence models and enable efficient parallel processing and handling of long-range dependencies in text.
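
The core operation behind Transformer self-attention can be sketched in a few lines. The function below implements scaled dot-product attention with illustrative tensor shapes; masking and multi-head splitting are omitted for brevity.

```python
import math
import torch

# Scaled dot-product attention: each position attends to all positions,
# weighted by softmax-normalized query-key similarity.
def scaled_dot_product_attention(query, key, value):
    # query, key, value: (batch, seq_len, d_k)
    d_k = query.size(-1)
    scores = query @ key.transpose(-2, -1) / math.sqrt(d_k)   # (batch, seq, seq)
    weights = torch.softmax(scores, dim=-1)                    # attention distribution
    return weights @ value, weights                            # weighted sum of values

batch, seq_len, d_k = 2, 5, 16
q = torch.randn(batch, seq_len, d_k)
k = torch.randn(batch, seq_len, d_k)
v = torch.randn(batch, seq_len, d_k)
context, attn = scaled_dot_product_attention(q, k, v)
print(context.shape, attn.shape)   # torch.Size([2, 5, 16]) torch.Size([2, 5, 5])
```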

Included

2 videos, 25 readings, 3 assignments, 2 app items

In this module, we’ll home in on pre-training and explore the foundational role of pre-training in modern NLP models, highlighting how models are initially trained on large, general datasets to learn language structures and semantics. This pre-training phase, often involving tasks like masked language modeling, equips models with broad linguistic knowledge, which can then be fine-tuned on specific tasks, enhancing performance and reducing the need for extensive task-specific data.
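
As a hedged sketch of the masked language modeling objective mentioned above: randomly replace a fraction of input tokens with a [MASK] id and compute the loss only at those positions. The mask id, ignore index, and 15% rate below are illustrative values; BERT's full recipe also leaves some selected tokens unchanged or swaps in random tokens.

```python
import torch

# A simplified masked language modeling (MLM) corruption step.
MASK_ID, IGNORE_INDEX, MASK_PROB = 103, -100, 0.15   # illustrative values

def mask_tokens(token_ids):
    labels = token_ids.clone()
    mask = torch.rand(token_ids.shape) < MASK_PROB   # positions to predict
    labels[~mask] = IGNORE_INDEX                     # loss is computed only where masked
    corrupted = token_ids.clone()
    corrupted[mask] = MASK_ID                        # hide the original token
    return corrupted, labels

batch = torch.randint(1000, 2000, (2, 10))           # toy token ids
inputs, labels = mask_tokens(batch)
print(inputs)
print(labels)
```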

Included

1 video, 19 readings, 2 assignments

Instructor

Ramin Mohammadi
Northeastern University
4 Courses, 531 learners

Offered by

Learn more about Machine Learning

Why do learners on Coursera choose us for their careers?

Felipe M.
Learner since 2018
"Being able to take courses at my own pace has been an amazing experience. I can learn whenever my schedule allows and according to my mood."
Jennifer J.
Learner since 2020
"I directly applied the concepts and skills I learned from my courses to an exciting new project at work."
Larry W.
Learner since 2021
"When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go."
Chaitanya A.
"Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits."
Coursera Plus

Open new doors with Coursera Plus

Unlimited access to 10,000+ world-class courses, hands-on projects, and job-ready certificate programs - all included in your subscription.

Advance your career with an online degree

Earn a degree from world-class universities - 100% online

Join over 3,400 global companies that choose Coursera for Business

Upskill your employees to excel in the digital economy

Frequently Asked Questions