About model training and prompt construction #1

Open
Dojay6103 opened this issue Mar 12, 2024 · 1 comment

Comments

@Dojay6103

As a beginner, I found that the effect of relationship extraction was not very satisfactory after training. I would like to ask you what prompts you use during testing or what techniques you have for training dialogue construction? In addition, will your data set of more than 10,000 items be open sourced in the future?

@nanoponge (Collaborator)

The fine-tuning instructions incorporate prompt wording similar to that used during data generation. In addition, the model would perform better if the relation-extraction examples in the dataset were manually corrected. Retraining on top of an open-source information-extraction LLM might also improve the generated results.
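
For illustration only, here is a minimal sketch of how a relation-extraction instruction sample might be built so that the fine-tuning prompt mirrors the data-generation prompt. The instruction wording, field names, and the Alpaca-style record layout are assumptions, not the prompts actually used in this repository:

```python
import json

# Hypothetical instruction template -- the wording is an assumption, not the
# prompt used in this project. The point is simply to reuse the same
# instruction text at fine-tuning time as was used when generating the data.
INSTRUCTION = (
    "Extract all (head entity, relation, tail entity) triples from the "
    "following text. Return them as a JSON list."
)

def build_sample(text: str, triples: list[dict]) -> dict:
    """Build one Alpaca-style training record for relation extraction."""
    return {
        "instruction": INSTRUCTION,
        "input": text,
        "output": json.dumps(triples, ensure_ascii=False),
    }

# Example with a manually verified triple, reflecting the suggestion that
# human-corrected relation labels improve fine-tuning quality.
sample = build_sample(
    "Marie Curie was born in Warsaw.",
    [{"head": "Marie Curie", "relation": "place_of_birth", "tail": "Warsaw"}],
)
print(json.dumps(sample, ensure_ascii=False, indent=2))
```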
