Show HN: A surprisingly effective way to predict token importance in LLM prompts https://ift.tt/AuMvwcJ

We explored a simple, model-agnostic way to gauge the importance of individual tokens in prompts given to large language models, without needing direct access to the model itself. The method is essentially an ablation study on the prompt, using cosine similarity of embeddings as the importance measure. Comparing this very simple approach against integrated gradients gave surprisingly promising results. Curious to hear thoughts from the community! https://ift.tt/8kbE2Ks September 12, 2023 at 12:29AM
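
To make the idea concrete, here is a minimal sketch of one way such a prompt ablation could look. The post does not specify the embedding model or tokenization, so the sentence-transformers model, whitespace tokenization, and the token_importance helper below are illustrative assumptions, not the authors' implementation: drop one token at a time, re-embed the ablated prompt, and treat the drop in cosine similarity to the full prompt's embedding as that token's importance.

# Hypothetical sketch of the described approach: leave-one-out token ablation
# scored by cosine similarity of prompt embeddings. The embedding model and
# whitespace tokenization are assumptions, not the authors' exact setup.
import numpy as np
from sentence_transformers import SentenceTransformer

def token_importance(prompt: str, model_name: str = "all-MiniLM-L6-v2"):
    """Score each whitespace token by how much removing it shifts the prompt embedding."""
    model = SentenceTransformer(model_name)
    tokens = prompt.split()

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    full_emb = model.encode(prompt)
    scores = []
    for i in range(len(tokens)):
        # Ablate token i and re-embed the remaining prompt.
        ablated = " ".join(tokens[:i] + tokens[i + 1:])
        ablated_emb = model.encode(ablated)
        # Importance = how far the embedding drifts without this token.
        scores.append(1.0 - cosine(full_emb, ablated_emb))
    return list(zip(tokens, scores))

if __name__ == "__main__":
    ranking = token_importance("Summarize the quarterly earnings report in three bullet points")
    for tok, score in sorted(ranking, key=lambda x: -x[1]):
        print(f"{score:.4f}  {tok}")

Under this reading, tokens whose removal barely moves the embedding score near zero, while content-bearing tokens produce a larger drift; those scores are what would then be compared against an attribution baseline such as integrated gradients.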
