Using ChatGPT on Improving Program Performance with pprof and Benchmark

Wei Cheng Lei, Luo You Jian, Yan Wen Chen, Li Der Chou

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

In the context of limited computing resources, optimizing program architecture is crucial. This paper therefore proposes applying the analytical capabilities of large language models (LLMs) to systematic performance optimization. The output of pprof and the Go benchmark tooling is fed into ChatGPT, and the program's performance is improved based on its feedback. In the case study, the number of memory allocations in the objective function was reduced from 99 to 1, cutting the benchmark execution time from 8.6 microseconds to 0.36 microseconds and the allocated memory from 53.5 KB to approximately 1 KB.
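To illustrate the kind of workflow the abstract describes, the sketch below shows a minimal Go benchmark with allocation reporting. It is not taken from the paper: BuildReport and BuildReportBuilder are hypothetical functions, the first allocation-heavy (naive string concatenation), the second the sort of rewrite an LLM might suggest after seeing the profile output.

```go
// Minimal sketch (assumed, not from the paper) of a Go benchmark whose
// output could be pasted into an LLM alongside a pprof profile.
package report

import (
	"strings"
	"testing"
)

// BuildReport is a hypothetical allocation-heavy function: repeated string
// concatenation allocates a new string on almost every iteration.
func BuildReport(lines []string) string {
	s := ""
	for _, l := range lines {
		s += l + "\n"
	}
	return s
}

// BuildReportBuilder is the kind of fix an LLM might propose: strings.Builder
// grows a single buffer, so the loop performs far fewer allocations.
func BuildReportBuilder(lines []string) string {
	var b strings.Builder
	for _, l := range lines {
		b.WriteString(l)
		b.WriteByte('\n')
	}
	return b.String()
}

func BenchmarkBuildReport(b *testing.B) {
	lines := make([]string, 100)
	for i := range lines {
		lines[i] = "some line of report text"
	}
	b.ReportAllocs() // reports allocs/op and B/op, the metrics cited in the abstract
	for i := 0; i < b.N; i++ {
		_ = BuildReport(lines)
	}
}
```

Running `go test -bench . -benchmem -memprofile mem.out` followed by `go tool pprof -top mem.out` produces the textual benchmark and profile output that can then be handed to ChatGPT for optimization suggestions.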

Original language: English
Title of host publication: 2023 5th International Conference on Computer Communication and the Internet, ICCCI 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 256-260
Number of pages: 5
ISBN (Electronic): 9798350326956
DOIs
State: Published - 2023
Event: 5th International Conference on Computer Communication and the Internet, ICCCI 2023 - Fujisawa, Japan
Duration: 23 Jun 2023 – 25 Jun 2023

Publication series

Name: 2023 5th International Conference on Computer Communication and the Internet, ICCCI 2023

Conference

Conference: 5th International Conference on Computer Communication and the Internet, ICCCI 2023
Country/Territory: Japan
City: Fujisawa
Period: 23/06/23 – 25/06/23

Keywords

  • Large Language Models
  • Performance analysis
  • Prompt Engineering
  • System optimization
