You can't. This is a limitation of LLM technology. An LLM outputs the most likely token sequence, and if "likely" doesn't match "correct" for your problem, there's nothing you can do.
Also, each LLM has its own definition of what "likely" is - it comes from the training and fine-tuning secret sauce of that particular LLM.
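To make the "most likely token" point concrete, here is a minimal sketch of greedy decoding: a softmax turns raw model scores into probabilities and the highest-probability token is picked. The vocabulary and logit values are made up for illustration; a real model scores tens of thousands of tokens per step.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores to probabilities; temperature reshapes what counts as "likely"
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits a model might assign to three candidate next tokens
vocab = ["cat", "dog", "fish"]
logits = [2.0, 1.0, 0.1]

probs = softmax(logits)
# Greedy decoding picks the most probable token - "likely", not necessarily "correct"
next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
```

Note that nothing here checks correctness: the model commits to whatever its training made probable, which is exactly the limitation described above.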