Divided training sentences into tokens using the OpenAI-GPT and BERT tokenizers. Implemented multiple training objectives in PyTorch: a causal language model, a masked language model, and a sequence-to-sequence model.
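The causal and masked objectives differ mainly in how targets are built from the token IDs. Below is a minimal PyTorch sketch of both losses; the tiny vocabulary, embedding size, and `TinyLM` stand-in model are illustrative assumptions, not the repository's actual settings.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM, MASK_ID = 100, 32, 0  # toy sizes; assumptions for illustration

class TinyLM(nn.Module):
    """Embedding -> linear head; stands in for a real transformer."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, ids):
        return self.head(self.emb(ids))  # (batch, seq, vocab) logits

model = TinyLM()
loss_fn = nn.CrossEntropyLoss(ignore_index=-100)
ids = torch.randint(1, VOCAB, (4, 16))  # fake tokenized batch

# Causal LM: predict token t+1 from tokens <= t (shift inputs vs. targets).
logits = model(ids[:, :-1])
causal_loss = loss_fn(logits.reshape(-1, VOCAB), ids[:, 1:].reshape(-1))

# Masked LM: replace ~15% of tokens with a mask ID; loss only on masked positions.
mask = torch.rand(ids.shape) < 0.15
mask[0, 0] = True  # ensure at least one masked position
corrupted = ids.masked_fill(mask, MASK_ID)
targets = ids.masked_fill(~mask, -100)  # unmasked positions are ignored by the loss
mlm_loss = loss_fn(model(corrupted).reshape(-1, VOCAB), targets.reshape(-1))
```

The sequence-to-sequence objective follows the same shifted-target pattern as the causal loss, but conditions the decoder on a separately encoded source sequence.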
tqdat712/Natural-Language-Processing
About
Uses multiple language models to complete sentences and summarize paragraphs.
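With a trained causal model, sentence completion reduces to repeated next-token prediction. A hedged sketch of greedy decoding follows; the tiny random model here is a placeholder for a trained checkpoint, and the sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, DIM = 100, 32  # toy sizes; assumptions for illustration
# Placeholder model: embedding followed by a linear head over the vocabulary.
model = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))

def complete(prompt_ids, max_new=5):
    """Greedy decoding: append the argmax next-token ID one step at a time."""
    ids = list(prompt_ids)
    for _ in range(max_new):
        logits = model(torch.tensor([ids]))   # (1, seq, vocab)
        ids.append(int(logits[0, -1].argmax()))  # most likely next token
    return ids

out = complete([5, 17, 42], max_new=4)  # 3 prompt tokens + 4 generated
```

Summarization with the sequence-to-sequence model would use the same decoding loop, conditioned on the encoded source paragraph.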