Akshay Kumar Darla
The Engineer’s Guide to LLMs: Integrating AI into Enterprise Java Applications
Abstract:
Large Language Models (LLMs) are reshaping the boundaries of enterprise software engineering. Yet most integration approaches fail to address the architectural rigor, security constraints, and operational demands of production Java systems. This keynote confronts that gap directly.
This session provides a technical walkthrough of embedding LLMs into enterprise Java applications using three leading frameworks: Spring AI, LangChain4j, and Semantic Kernel. We examine each framework’s design philosophy, strengths, and trade-offs in production contexts. Core topics include Retrieval-Augmented Generation (RAG) pipeline design, vector database integration, prompt engineering patterns, and asynchronous LLM invocation using modern Java concurrency primitives.
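To make the last of these topics concrete, here is a minimal sketch of asynchronous LLM invocation with modern Java concurrency primitives: prompts are fanned out on virtual threads (Java 21+) via `CompletableFuture`. The `LlmClient` interface is a hypothetical stand-in for a real framework client such as Spring AI's `ChatClient`, not an API from any of the frameworks discussed.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: fan several prompts out to an LLM concurrently using
// virtual threads (Java 21+) and CompletableFuture.
public class AsyncLlmInvoker {

    // Hypothetical stand-in for a real client (e.g. Spring AI's ChatClient);
    // modeled here as a blocking prompt -> completion call.
    interface LlmClient {
        String complete(String prompt);
    }

    private final LlmClient client;
    private final ExecutorService executor =
            Executors.newVirtualThreadPerTaskExecutor();

    public AsyncLlmInvoker(LlmClient client) {
        this.client = client;
    }

    // Submit each prompt on its own virtual thread, then collect results
    // in the original order.
    public List<String> completeAll(List<String> prompts) {
        List<CompletableFuture<String>> futures = prompts.stream()
                .map(p -> CompletableFuture.supplyAsync(
                        () -> client.complete(p), executor))
                .toList();
        return futures.stream().map(CompletableFuture::join).toList();
    }
}
```

In production the blocking `complete` call would typically wrap an HTTP request to a model endpoint; virtual threads keep this thread-per-request style cheap even under high fan-out.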
Beyond architecture, we address the operational realities: latency management, token cost optimization, prompt injection defense, model versioning, and LLM evaluation strategies. Attendees will leave with a concrete engineering framework for building, deploying, and governing intelligent Java applications that are robust, observable, and production-ready.
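As one illustration of latency management, the following sketch bounds an LLM call with a deadline and falls back to a degraded response rather than letting the request hang. The method and names are assumptions for illustration; it uses only `CompletableFuture.orTimeout` from the JDK.

```java
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch: enforce a latency budget on a blocking LLM call and return a
// fallback answer on timeout or provider error.
public class BoundedLlmCall {

    // callModel stands in for a real blocking client call to a model endpoint.
    public static String completeWithDeadline(Supplier<String> callModel,
                                              Duration deadline,
                                              String fallback) {
        return CompletableFuture.supplyAsync(callModel)
                .orTimeout(deadline.toMillis(), TimeUnit.MILLISECONDS)
                .exceptionally(ex -> fallback) // timeout or upstream failure
                .join();
    }
}
```

Note that `orTimeout` only completes the future exceptionally; the underlying blocking call keeps running, so a production version would also cancel or interrupt the in-flight request to avoid wasted tokens.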
Profile:
I am a Sr. Software Engineer at Tata Consultancy Services, holding a Master's degree in Computer Science from Western Illinois University. I support and modernize enterprise insurance platforms for Pacific Life, handling production incident resolution and development of scalable application components. I work extensively with Java/J2EE, Spring Boot microservices, IBM WebSphere, REST/SOAP APIs, and Agile methodologies. My experience includes data engineering with Kafka Streams and Kafka Connect, Cassandra and relational databases, and system integration using JSON and XML. I also support AWS-based deployments, monitoring, and automation, and leverage AI-assisted testing, CI/CD tools, and intelligent monitoring to improve performance, reliability, and operational efficiency of enterprise systems.

