
Explaining GPT-3 or GPT-Neo via SHAP values using Python
£170 (approx. $226)
- Posted:
- Proposals: 9
- Remote
- #4043587
- OPPORTUNITY
- Archived

Description
Experience Level: Entry
Estimated project duration: less than 1 week
The main aims of this project are as follows:
- To explain large language models (GPT-3 or GPT-Neo) at a black-box level.
- Utilise pre-trained large language models to generate SHAP values that serve as their explanations.
- Apply fine-tuning and few-shot/zero-shot (prompt) learning to the LLM to show how these techniques change the model's explanations.
This project requires:
- A Google Colab Python script to preprocess datasets by tokenising them into sequences and selecting appropriate input and output formats for each LLM, so that SHAP values can be generated (see the preprocessing sketch below).
- Generation of SHAP values for the pre-trained LLMs (GPT-3 or GPT-Neo) before and after fine-tuning and prompt learning, to demonstrate the impact of these techniques on the models' explanations (a worked example follows the linked notebook below).
- Evaluation of the models' explanations by comparing the SHAP values before and after fine-tuning and prompt learning, calculating the EXPLANATION OVERLAP to determine how far these techniques change the models' explanations (one possible overlap metric is sketched at the end of this description).
After generating the SHAP values before and after fine-tuning and few-shot/zero-shot learning, I need the explanation overlap to be calculated to demonstrate the change in explanations. PLEASE NOTE: this project is specifically for TEXT GENERATION.
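A minimal preprocessing sketch for the first requirement, assuming Hugging Face `transformers` and `datasets` in Colab; the "imdb" dataset and the 125M GPT-Neo checkpoint are placeholder choices, not part of the brief:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder checkpoint; GPT-Neo ships without a pad token, so reuse EOS.
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("imdb", split="train[:100]")  # placeholder dataset

def tokenise(batch):
    # Turn raw text into fixed-length token-id sequences the LLM can consume.
    return tokenizer(
        batch["text"], truncation=True, max_length=128, padding="max_length"
    )

tokenised = dataset.map(tokenise, batched=True, remove_columns=dataset.column_names)
```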
Here is a link for more information on SHAP values for text generation: https://shap.readthedocs.io/en/latest/example_notebooks/text_examples/text_generation/Open%20Ended%20GPT2%20Text%20Generation%20Explanations.html
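The following sketch is adapted from that notebook, assuming the 125M GPT-Neo checkpoint (chosen because it runs locally in Colab; explaining GPT-3 through the OpenAI API would need a different model wrapper):

```python
import shap
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# Mark the model as a decoder and pin the generation settings so the
# explanations are computed against deterministic outputs.
model.config.is_decoder = True
model.config.task_specific_params = {
    "text-generation": {"do_sample": False, "max_length": 50}
}

explainer = shap.Explainer(model, tokenizer)
shap_values = explainer(["I enjoy walking with my cute dog"])
shap.plots.text(shap_values)  # input-token attributions per generated token
```

Running the same explainer with the fine-tuned checkpoint (or a few-shot prompt prepended to the input) yields the "after" SHAP values to compare against.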
Please view attached file for more information and initial resources.
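The brief does not fix a definition of explanation overlap; one common reading is the Jaccard overlap of the top-k input tokens ranked by absolute attribution, sketched below under that assumption (the function name and k are illustrative):

```python
import numpy as np

def explanation_overlap(attr_before, attr_after, k=10):
    # Jaccard overlap of the top-k input positions ranked by |SHAP value|.
    # attr_before/attr_after are 1-D arrays of per-input-token attributions
    # for the same prompt, before and after fine-tuning/prompt learning.
    top_before = set(np.argsort(-np.abs(attr_before))[:k])
    top_after = set(np.argsort(-np.abs(attr_after))[:k])
    return len(top_before & top_after) / len(top_before | top_after)

# For text generation, shap_values.values[0] has one row per input token and
# one column per generated token; summing absolute values over the columns
# gives a single attribution per input token:
#   attrs = np.abs(shap_values.values[0]).sum(axis=1)
```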

Zoey C.
Feedback: 100% (5)
Projects completed: 5
Freelancers worked with: 4
Projects awarded: 50%
Last project: 10 Apr 2024
United Kingdom