The notebook in this folder teaches two powerful prompting techniques: chain of thought and ReAct (Reasoning + Acting).
ReAct (and its variants) is the current state-of-the-art prompting technique for improving LLM reasoning while minimizing hallucinations. Chain of thought is a relatively low-effort technique for improving prompt performance and robustness by adding verbal reasoning.
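To make the ReAct loop concrete, here is a minimal sketch of its control flow, with the LLM replaced by scripted completions so the Thought/Action/Observation cycle is visible without API access. The `calculator` tool and the exact bracket syntax are illustrative assumptions, not the notebook's implementation:

```python
import re

def calculator(expression: str) -> str:
    # Toy tool: evaluate simple arithmetic.
    # (Never eval untrusted input in real code.)
    return str(eval(expression))

TOOLS = {"calculator": calculator}

# Scripted stand-ins for model completions; a real agent would
# call an LLM here and append each Observation to the prompt.
scripted_steps = [
    "Thought: I need to compute 17 * 23.\nAction: calculator[17 * 23]",
    "Thought: The observation gives the result.\nFinal Answer: 391",
]

observations = []
answer = None
for step in scripted_steps:
    match = re.search(r"Action: (\w+)\[(.+)\]", step)
    if match:
        tool, arg = match.groups()
        # Run the requested tool; its output becomes the next Observation.
        observations.append(TOOLS[tool](arg))
    elif "Final Answer:" in step:
        answer = step.split("Final Answer:", 1)[1].strip()

print(answer)  # → 391
```

The key idea is that the model alternates between verbal reasoning (Thought) and tool calls (Action), grounding each step in the tool's Observation instead of guessing.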
The notebook also covers LLM tools/actions, self-consistency, zero-shot chain of thought, and the basics of how LangChain implements ReAct.
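Self-consistency can be sketched in a few lines: sample several chain-of-thought completions at non-zero temperature and take the majority final answer. The completions below are hand-written stand-ins for real model samples, and the `Answer:` marker is an assumed output format:

```python
from collections import Counter

# Stand-ins for sampled chain-of-thought completions; a real run
# would sample an LLM several times with temperature > 0.
completions = [
    "Each tray holds 6 eggs; 4 trays x 6 = 24. Answer: 24",
    "4 trays of 6 eggs is 4 * 6 = 24. Answer: 24",
    "6 + 6 + 6 = 18. Answer: 18",  # one faulty reasoning path
]

def extract_answer(text: str) -> str:
    """Pull the final answer after the 'Answer:' marker."""
    return text.rsplit("Answer:", 1)[-1].strip()

# Majority vote across reasoning paths outvotes the faulty one.
votes = Counter(extract_answer(c) for c in completions)
final_answer, _ = votes.most_common(1)[0]
print(final_answer)  # → 24
```

The vote makes the result robust to individual faulty reasoning chains, at the cost of extra sampled completions.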
To run the walkthrough and demonstration in the notebook, you'll need access to a Google Cloud project with the Vertex AI API enabled.
If you have any questions or find any problems, please report them through GitHub Issues.