Submission Number: 14
Submission ID: 10192
Submission UUID: 07f427ba-caac-4570-b8a8-933fe7a8887c

Created: Mon, 01/06/2025 - 10:56
Completed: Mon, 01/06/2025 - 10:56
Changed: Mon, 01/06/2025 - 13:02

Remote IP address: 10.64.6.7
Submitted by: mdear2
Language: English

Is draft: No
Enhancing the Capabilities of Not-So-Large Large Language Models with Autonomous Self-Prompting
Projected Timeline
Mon, 01/13/2025 - 00:00

This research project investigates the development of an autonomous, iterative pipeline designed to enhance the capabilities of small- to mid-sized large language models (LLMs) by enabling them to refine both system and user prompts for a given task. We explore strategies for self-prompting frameworks in which the LLM iteratively analyzes a user prompt, generates optimized system-level instructions, and refines the prompt to improve task-specific performance in terms of quality and relevance of the generated response.
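The analyze-then-refine loop described above can be sketched in a few lines. This is a minimal illustration, not the project's implementation; the `llm()` function is a hypothetical stand-in for any small-model completion API, and the three-step structure (critique, regenerate system instructions, rewrite the user prompt) mirrors the pipeline as stated.

```python
def llm(system: str, user: str) -> str:
    """Stub completion call; replace with a real local or hosted model endpoint."""
    return f"[response | system={system!r} | user={user!r}]"

def self_prompt(task: str, rounds: int = 3) -> str:
    """Iteratively refine the system and user prompts for a task, then answer it."""
    system = "You are a helpful assistant."
    user = task
    for _ in range(rounds):
        # 1. Analyze the current user prompt for gaps or ambiguity.
        critique = llm("Critique this prompt for clarity and missing context.", user)
        # 2. Generate optimized system-level instructions from the critique.
        system = llm("Write improved system instructions based on this critique.", critique)
        # 3. Refine the user prompt itself under the new instructions.
        user = llm(system, f"Rewrite this prompt to address the critique: {user}")
    # Produce the final answer with the refined system/user prompt pair.
    return llm(system, user)
```

Keeping the refinement entirely in-context, as here, is what lets the approach work without fine-tuning or added parameters.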

The largest state-of-the-art LLMs often demonstrate superior capabilities. However, smaller models are more accessible for resource-constrained environments, such as edge and personal devices. Enhancing the utility of these lightweight LLMs without fine-tuning or expanding their parameter counts could significantly broaden their practical applications.

To assess the effectiveness of this self-prompting framework, the project will establish both qualitative and quantitative benchmarks, including NLP-based similarity scoring and task-specific performance metrics. The interdisciplinary scope will be supported through collaboration with subject matter experts across multiple academic fields to design diverse input prompts and evaluate the quality of the generated outputs.
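As one concrete instance of the NLP-based similarity scoring mentioned above, a generated response can be scored against a reference answer. The sketch below uses the standard library's `SequenceMatcher` purely for illustration; the project's actual metrics (e.g., embedding-based cosine similarity) would be more robust.

```python
from difflib import SequenceMatcher

def similarity_score(generated: str, reference: str) -> float:
    """Return a similarity ratio in [0, 1] between a generated and a reference text."""
    return SequenceMatcher(None, generated.lower(), reference.lower()).ratio()

# Compare a baseline response and a refined response against the same reference.
reference = "Paris is the capital of France."
baseline = similarity_score("The capital of France is Paris.", reference)
refined = similarity_score("Paris is the capital of France.", reference)
```

Tracking such scores before and after each refinement round gives a simple quantitative signal of whether the self-prompting loop is improving response quality.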

While this project aims to expand the capabilities of resource-efficient LLMs, we also seek to identify reusable, LLM-specific strategies for improving prompt effectiveness across domains. The findings may later be integrated into existing self-correcting LLM pipelines for tasks involving scientific code generation, contributing to the further advancement of autonomous LLM techniques in scientific contexts.

Faculty, Student
Computer Science
Not required
This is a new research proposal for collaboration with interested undergraduate and graduate students at UIS. The ideas outlined here are closely related to current efforts at Argonne National Laboratory in delivering secure, enterprise-scale internal generative AI solutions, as well as to recent research published here: https://doi.org/10.1109/CLUSTERWorkshops61563.2024.00029

We are also interested in partnering with subject matter experts across disciplines at UIS to help design input prompts and to experimentally evaluate LLM performance under the explored self-prompting strategies.

Interested students as well as faculty and staff may reach out to Matthew Dearing at mdear2@uis.edu for more information and opportunities.