Pros
- Interface to easily compose components
- As one comment puts it, it’s a great way to see what’s possible with LLM workflows before you build a more production-grade version yourself
Cons
- Class and method sprawl
    - Seems like there are multiple ways to do the same thing: `.invoke` vs `.predict` vs `.run`, `Tool` vs `@tool`, plus `llm.bind`
- There’s a lot of encapsulation, but not enough abstraction imo
- Limited documentation
    - I’ve seen docs that talk about wrapping LLM chains with other LLM chains as a backup
- I think overall, I may not have the proper mental model yet
- Online, people seem to complain about poor performance (both ML quality and engineering overhead)
- Feels weird that some code is in `langchain`, other code in `langchain_core`, and yet other pieces live in their own provider-specific packages like `langchain_openai`
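The "chains wrapping chains as backup" idea can be sketched without LangChain at all. This is a plain-Python illustration of the fallback pattern, not LangChain's actual API; all names here (`with_fallback`, `flaky_chain`, `backup_chain`) are hypothetical:

```python
def with_fallback(primary, backup):
    """Return a callable that tries `primary` and falls back to `backup`.

    `primary` and `backup` stand in for LLM chain calls; in a real setup
    each would hit a different model or provider.
    """
    def run(prompt):
        try:
            return primary(prompt)
        except Exception:
            # Primary chain failed (e.g., rate limit, timeout): try backup.
            return backup(prompt)
    return run

def flaky_chain(prompt):
    raise RuntimeError("rate limited")  # simulated provider failure

def backup_chain(prompt):
    return f"backup answer for: {prompt}"

chain = with_fallback(flaky_chain, backup_chain)
print(chain("hello"))  # → backup answer for: hello
```

The wrapped thing has the same call signature as the original, which is presumably why the docs describe it as "wrapping one chain with another."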
Questions
- How can I handle errors if my tool raises one?
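One common pattern (a plain-Python sketch, not LangChain's API) is to catch the tool's exception and return the error message as the tool's observation, so the agent loop keeps running and the LLM can see what went wrong and retry. The names `safe_tool_call` and `divide` are illustrative:

```python
def safe_tool_call(tool, tool_input):
    """Run a tool, converting any exception into an error observation.

    Instead of crashing the agent loop, the error string is returned so
    it can be fed back to the LLM as the tool's output. `tool` is any
    callable here, not a LangChain class.
    """
    try:
        return tool(tool_input)
    except Exception as exc:
        return f"Tool error: {type(exc).__name__}: {exc}"

def divide(raw):
    a, b = (int(x) for x in raw.split("/"))
    return str(a // b)

print(safe_tool_call(divide, "10/2"))  # → 5
print(safe_tool_call(divide, "10/0"))  # Tool error: ZeroDivisionError: ...
```

This matches the general agent pattern of treating tool failures as just another observation; whether LangChain exposes a built-in flag for this is worth checking in its current docs.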