
Symflower for LLMs

Symflower supports LLM development, benchmarking, LLM-based code generation workflows, and Retrieval-Augmented Generation (RAG):

  • Prompt engineering: Augment prompts with data from Symflower's analytics tools to provide the LLM with more context about the code (see the first sketch below the list).
  • Improving efficiency: Code repair makes LLM-generated code more useful by fixing compile errors (see the second sketch below the list).
  • Training data curation: Using Symflower helps filter out projects with problematic code to provide higher-quality training data to the LLM.
  • LLM training, fine-tuning, RAG: Providing Symflower-generated tests or statically analyzed code to the LLM during training, during fine-tuning, or in a RAG scenario helps improve accuracy. Fine-tuning with Symflower showed an 8% improvement in a Java <-> Python transpiler.
  • Benchmarking: The DevQualityEval benchmark covers a variety of metrics to evaluate code quality and help find the most useful LLMs for the evaluated software development tasks.
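
Below is a minimal sketch of the prompt-engineering idea, not a definitive implementation: the `load_symflower_context()` helper is a hypothetical stand-in for whatever analysis output (for example, generated test skeletons or symbol information) you export from Symflower and want to place next to the source code in the prompt.

```python
def load_symflower_context(source_file: str) -> str:
    """Hypothetical helper: read analysis output exported for the file.

    Assumes the analysis (test templates, type information, call graphs)
    has been written next to the source as `<file>.analysis.txt`.
    """
    with open(source_file + ".analysis.txt", encoding="utf-8") as f:
        return f.read()


def build_prompt(source_file: str, task: str) -> str:
    """Combine the task, the source code, and the extra analysis context."""
    with open(source_file, encoding="utf-8") as f:
        code = f.read()
    context = load_symflower_context(source_file)
    # The extra context gives the model concrete facts about the code
    # (signatures, dependencies, expected test structure) instead of
    # leaving it to guess them.
    return (
        f"{task}\n\n"
        f"Source file ({source_file}):\n{code}\n\n"
        f"Additional context from static analysis:\n{context}\n"
    )


if __name__ == "__main__":
    print(build_prompt("Calculator.java", "Write JUnit 5 tests for this class."))
```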

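The second sketch illustrates the generate-then-repair loop behind the efficiency point above, under stated assumptions: `generate_code()` stands in for an LLM call and `repair_code()` for an automatic repair step such as Symflower's code repair; neither name reflects a real API, and `javac` is used only as a simple compile check.

```python
import subprocess
import tempfile
from pathlib import Path


def compiles(java_file: Path) -> bool:
    """Return True if `javac` accepts the candidate file."""
    result = subprocess.run(
        ["javac", str(java_file)], capture_output=True, text=True, cwd=java_file.parent
    )
    return result.returncode == 0


def generate_then_repair(generate_code, repair_code, class_name: str) -> str:
    """Ask the model for code; if it does not compile, run the repair step."""
    candidate = generate_code(class_name)
    with tempfile.TemporaryDirectory() as tmp:
        java_file = Path(tmp) / f"{class_name}.java"
        java_file.write_text(candidate, encoding="utf-8")
        if compiles(java_file):
            return candidate
        # Instead of discarding uncompilable output, hand it to the repair
        # step and keep it only if the repaired version actually compiles.
        repaired = repair_code(candidate)
        java_file.write_text(repaired, encoding="utf-8")
        return repaired if compiles(java_file) else candidate
```
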
Symflower's support for LLMs: