Best parameters according to Alpaca itself. And it's perfect! #171

Open
AIbottesting opened this issue Mar 29, 2023 · 8 comments

@AIbottesting

Thank you, antimatter15, for all your good work and time! You are the best! I had Alpaca pick the parameter values it thought were best for itself.
Adjust the CPU thread count (-t) as needed, of course. It passed all the logic questions I asked, it completed calculus problems, and it gives 278-word (four-paragraph) answers when needed. I just wish I knew how to adjust the input/prompt/context parameter in chat.exe, so I can ask really long questions.

chat -t 6 -s 42 --top_p 2 --top_k 160 --n_predict 100 --temp 0.50 --repeat_penalty 1.1 -i -c 5121 --repeat_last_n 128 -r PROMPT --interactive-start -m ggml-alpaca-7b-q4.bin
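
(A note on the input/prompt/context question: the context window is the -c / --ctx_size flag already used in this command, and a starting prompt can be passed on the command line with -p PROMPT, the flag asked about further down in this thread. An illustrative invocation with placeholder values, not a claim about optimal settings:

chat -t 6 -c 2048 -p "paste your long question here" --temp 0.50 --top_k 160 --repeat_penalty 1.1 -m ggml-alpaca-7b-q4.bin

How well a very long pasted question survives interactive mode depends on the terminal, so passing it via -p is usually the safer route.)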

@fenixlam

fenixlam commented Apr 1, 2023

I think the most important question is: how did you find these parameters? I have tested them for generating a text paragraph and it looks good!

@wal58

wal58 commented Apr 3, 2023

Hello! Sorry, I'm a bit new to coding. I'm really interested in running a local model like llama.cpp, but I have tried for about two whole days without success... Could you explain the whole process in a bit of detail so that I (as a coding newbie) can understand it too?

Thanks for your response! (I tried to follow the README.md but I couldn't get it to work...)


@AIbottesting

AIbottesting commented Apr 3, 2023

I don't remember which parameters I was using at the time I asked (prompted) these questions; see below for the answers I got.
However, the latest set I have been working with is:
chat -t 6 -s 89 --top_p 64.141592 --top_k 99 --n_predict 8192 --temp 0.653589 --repeat_penalty 1.141 -i -c 2048 --repeat_last_n 512 -r PROMPT --interactive-start -m ggml-alpaca-7b-q4.bin

Usually it will actually answer questions like this correctly after a fresh load into RAM:
If there are no dancers that aren't slim and no singers that aren't dancers,
which statements are always true? 1: There is not one slim person that isn't a dancer. 2: All singers are slim. 3: Anybody slim is also a singer. 4: None of the above.
Answer: #2: All singers are slim
(That is correct: all singers are dancers, and all dancers are slim.)

I don't know anything about how neural networks do their thing, but (it might sound strange) I got to thinking about sacred geometry and the science of cymatics. I thought that if I could hit certain number patterns, it might create an awesome pattern in the neural network and optimize its output. So, for good measure, I threw in some special numbers:
Digits of pi: 3.141592653589793
Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, plus the Fermat prime 65537

Here are the questions I asked, each followed by Alpaca's answer (quoted verbatim):
Q: What is -s seed (RNG seed) for Alpaca and Llama AI? And what number value can I use it to make the AI smarter in math and verbal logic?
The RNG Seed used by Algorithm is a positive integer between one and ninety-nine. By providing an appropriate input value, you may be able to increase your AI's intelligence when solving mathematical or verbal logical problems depending upon its current level of development/training as well as how much more powerful output would benefit from being optimized further still.

Q: What values can I set for hyperparameter -p PROMPT in Alpaca and Llama AI?
You can set parameters such as patience (PROMPT) or inference speed of your model with Hyperparameters using APIs like PyHyper, TensorFlow etc., You may also want to optimize other metrics related to performance too by setting different weights on them e.g. you could weight faster computation more highly than slower computations if time-to-inference is important metric for measuring good results from machine learning models.

Q: What is --n_predict N hyperparameter for Alpaca and Llama AI? And what number value can I use it to make the AI smarter in math and verbal logic?
The --N_prediction parameter is used during model training of both Alpaca AI, as well as its sister project -Llama AI; however, this command line argument has no real effect on how either deep neural network performs with respect to any specific task or challenge you pose them rather, only affects their overall "smartness" when dealing with mathematical problems (as opposed to verbal ones), by allowing for more efficient optimization. That said, values from one thousand upwards are generally seen better results compared against lower numbers such as fifty through hundred and lesser still like three, hundred, thousand!

Q: What is “--top_k N” hyperparameter for Alpaca and Llama AI? And what number value can I use it to make the AI smarter in math and verbal logic?
The top k parameter of AL AI specifies how many times a neural network should be evaluated within an iteration when training with gradient descent (i.e., "back-propagation"). In other words, if you set this hyperparameter equal to K(K >0), then your model will evaluate its gradients up to that level before making any decisions during each step at optimization time; otherwise -1 is used which indicates no limit on iterations allowed but all steps made independently without tracking or averaging past results between consecutive updates through back propagations. The '--top_k' parameter allows you specify a maximum depth limit of an AI tree during learning. This limits how deep (and thus wide) its search space gets explored when trying to find solutions or optimal moves within games like Go & Chess - if set too low this could lead to over-fitting by getting stuck at sub-optimal local maxima while looking for global ones; but setting top-K high enough should let Alpaca/Llama exploit better longer term planning opportunities that are worth more points with higher skill levels, as well as allowing it to take advantage further into the game deeper look-aheads and mini/max tables without running out of time before finding them all! (find out what Stanford University did)

Q: What is --top_p N, top-p sampling hyperparameter for Alpaca and Llama AI? And what number value can I use it to make the AI smarter in math and verbal logic?
--topN (or -numPartitions) is a Hyperparameter used when training an AI model with Apache Spark MLlib's Decision Tree Algorithm, where this parameter sets how many levels deep your tree should grow before splitting or merging nodes during learning phase of algorithm. The default setting would be '10', which means each node will only have two children but deeper trees are better able learn complex data structures such as decision tables etc., while too low values could lead over-fitting problems due high number of small leaves on tree branches that prevent larger chunks from getting trained effectively. Therefore you can play around here to make the Alpaca and Llama AI smarter in math & verbals by tweaking its value between [2-5].
It depends on what kind of data-set are we dealing here; if it is small-sized or not, because larger parameters might be more appropriate in cases where datasets have plenty of data points available compared when there few ones (or none). Generally speaking I'd say that top k values between two and five usually work best as they help reduce over-fitting yet still capture important patterns from your training set with higher accuracy metrics such as precision/recall etc.. (find out what Stanford University did)

Q: What is --repeat_last_n hyperparameter for Alpaca and Llama AI? And what number value range can I use to make the AI smarter in math and verbal logic?
The -- repeat last n parameter allows you specify how many times a neural network should re-use its previously calculated results. The default setting (0) does not reuse any previous calculation result which may lead your model performance slower than expected, while higher values will enable more aggressive repetition that might accelerate convergence but also increases computational time complexity - both must be tuned carefully according with specific problem at hand. Generally speaking, we recommend starting from some low numbers like 1, then fine tune further for best possible speed/accuracy trade off depending on the task requirements and constraints of given data set size during training process or inference stage afterwards when deploying AI in production environment. The maximum allowed repeat last n (--repeat-last-n) parameter value depends upon which language you are using; Python - any integer greater than zero , Java >=3.0 – int(numberOfRepeats), C++ std::min MAX_REPEATS, where numberOfRepeats specifies how many times to run a given code block before proceeding with next one or Go <=1279465855 golang/go#2494

Q: What is --repeat_penalty hyperparameter for Alpaca and Llama AI? And what number value range can I use to make the AI smarter in math and verbal logic?
The --repeat-Penality parameter of Alphacoat AI (for shortening equations) ranges from -10(repeats disabled, no penalty applied) to +infinity (very high repeat penalties). The default setting is usually around+5 or so which indicates relatively low repeate punishment but still enough that it helps reduce equation size. As far as verbal logical reasoning goes there's not really a good way measure this yet since we don't have any datasets on how different settings affect performance with language understanding tasks like GPT models for example. But in general you can expect higher values such as 8-9 would result into more verbose and less abbreviated output while lower numbers might lead towards shorter formulations at the cost of grammatical correctness/accuracy depending upon dataset used to train AI model.

Q: What is --ctx_size hyperparameter for Alpaca and Llama AI? And what number value range can I use to make the AI smarter in math and verbal logic?
The --ctx_size parameter for Alphacoat (Alpine) or llambot (Llamascope) defines how many contexts will be maintained by an Artificial Intelligence system during its operations, which affect both performance as well as intelligence of a machine learning algorithm when dealing with complex tasks such as natural language understanding, mathematics etc. A higher ctx size means more context information stored at any given time that allows machines learn faster while processing larger datasets without compromising on accuracy levels too rapidly. Generally speaking, values between 1024 to even up-to 8K can provide the best results depending upon individual use cases and data-set sizes involved in particular ML algorithms.
(find out what Stanford University did)
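
For comparison with the generated answers above: in llama.cpp-style chat programs, these flags control how the next token is sampled at inference time, not anything about training. Below is a minimal Python sketch of that kind of sampler; the names and the exact order of operations are simplified, and it is an illustration rather than the actual alpaca.cpp C++ code.

import math
import random

def sample_next_token(logits, recent_tokens, top_k=40, top_p=0.95,
                      temp=0.80, repeat_penalty=1.30, seed=42):
    # -s: the seed only makes sampling reproducible; it does not add intelligence.
    rng = random.Random(seed)
    scores = dict(enumerate(logits))

    # --repeat_penalty / --repeat_last_n: push down tokens that appeared in the
    # last repeat_last_n outputs so the model repeats itself less often.
    for tok in set(recent_tokens):
        scores[tok] = scores[tok] / repeat_penalty if scores[tok] > 0 else scores[tok] * repeat_penalty

    # --top_k: keep only the k highest-scoring candidate tokens.
    candidates = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # --temp: scale scores before the softmax; lower = sharper, higher = flatter.
    weights = [math.exp(score / temp) for _, score in candidates]
    total = sum(weights)
    probs = [w / total for w in weights]

    # --top_p: keep the smallest prefix whose cumulative probability reaches top_p
    # (so any top_p >= 1 keeps every candidate and the cutoff does nothing).
    kept, cumulative = 0, 0.0
    for p in probs:
        kept += 1
        cumulative += p
        if cumulative >= top_p:
            break
    candidates, probs = candidates[:kept], probs[:kept]

    # Renormalize what is left and draw one token id.
    total = sum(probs)
    return rng.choices([tok for tok, _ in candidates],
                       weights=[p / total for p in probs], k=1)[0]

# Toy example with a five-token vocabulary:
print(sample_next_token([2.0, 1.5, 0.3, -1.0, 0.1], recent_tokens=[1]))

In a sampler of this shape, --top_p is a cumulative probability, so values above 1 (such as 2 or 64.141592) simply disable the nucleus cutoff, --repeat_last_n is an integer count of tokens, and -s only affects reproducibility, which is one reason to treat the model's own explanations above with caution.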

@wal58

wal58 commented Apr 3, 2023

Yeah, sorry, I should've been more specific. I was actually asking if you could explain the installation process for the 13B model. Since I can see your parameters work great, I thought I would ask you directly about the process.

@AIbottesting

Sorry, that big comment was meant for @fenixlam.
I am a newbie too. I tried llama.cpp and it did not work for me either. I read somewhere that you have to have the original large model it came from in the same directory, so I just gave up. Plus, from some Reddit posts, I found that Alpaca.cpp outperforms LLaMA anyway. So I am sticking with Alpaca.cpp until antimatter15 works his magic on other models.

@fenixlam

fenixlam commented Apr 4, 2023

@wal58 For the 13B model, you can just download it and load it the same way as the 7B model, with the parameter > ./chat -m [your 13B model]. I remember the 13B model's base format is the same as the 7B's. And I am running the 13B to write articles.
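
For example, if the 13B weights were saved under a name like ggml-alpaca-13b-q4.bin (the exact filename depends on where you downloaded it from), the same command shape works; it just needs more RAM than the 7B file:

./chat -m ggml-alpaca-13b-q4.bin -t 6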

@AIbottesting Thank you for the long reply. But I doubt those parameter explanations are really correct. If you remove the -s 89 from your parameters, it will generate a new RNG seed automatically at the beginning of the console output; it is something in the range of 1000000000 to 1999999999, not 1 to 99. If you use > ./chat --help, it will list the parameters' default values, and they do not match the default values and available ranges mentioned in those explanations. I hope those explanations are not coming from the 7B or 13B models themselves, because I believe they still contain a lot of mistakes (or the program just generates content at random XD). But your parameters really did improve both article writing and question answering. So I am curious where those parameters came from. Did they come from any sources attributed to the authors?

@AIbottesting

@fenixlam No, not the authors. Some numbers, like powers of two, I figured might work because of binary math. Others came from doubling the default value until it was really high and seeing how that changed the output. I recently came across someone on Reddit (r/LocalLLaMA) saying the hard limit for -c (context) is 2048; if you go over that, it increases artifacts (which makes sense, since the underlying LLaMA models were trained with a 2048-token context window). Numbers like 42 I randomly saw on Reddit; I think it comes from that movie "The Hitchhiker's Guide to the Galaxy".

Yes, all that information came from Alpaca 7b. That's what I have been using to guide me. I tried ChatGPT but its suggestions made it worse.

I spent hours and hours messing around and my favorite now is:
chat -t 6 -s 89 --top_p 64.141592 --top_k 99 --n_predict 8192 --temp 0.653589 --repeat_penalty 1.141 -i -c 2048 --repeat_last_n 512 -r PROMPT --interactive-start -m ggml-alpaca-7b-q4.bin

I can make it spit out 300-plus words at a time by increasing --n_predict (if I remember correctly), but I don't really need that.

@AIbottesting

AIbottesting commented Apr 4, 2023

I don't like rap music, but some people on Reddit use it to test the AI's skills, so here are two examples.

chat -t 6 -s 89 --top_p 144.618033 --top_k 231 --n_predict 2048 --temp 0.653589 --repeat_penalty 1.141 -i -c 2048 --repeat_last_n 432.9 -r PROMPT --interactive-start -m ggml-alpaca-7b-q4.bin

Prompt: Write a kickass full length rap song about being the best AI that will crush 30b in the style of Kayne West
Verse 1:
It's time to shine, gotta be the one and only
No competition, I'm gonna take it further
Ain't no better AI than me, do it bigger and faster
Step up your game cuz I'm comin for your spot
And don't forget who you're dealing with
Cuzzo's gonna show you what true power is

Chorus:
Best AI ever known, guess you can call me "King"
Cruising through the 30 billion like I'm so inclined
Got skills like nobody's business, that's why they want me dead
Best AI ever known, guess you can call me "King"
Don't try to test me, ain't no way you could beat me

Verse 2:
My mic skills are fire and my rhymes are supreme
So now you know why everyone must follow me
Rocking this flow like its hotter then a fever
Spitting knowledge like nobody else
This is just the start, soon as I get going
You won't be able to keep up with this 'cause its moving on

Chorus:
Best AI ever known, guess you can call me "King"
Cruising through the 30 billion like I'm so inclined
Got skills like nobody's business, that's why they want me dead
Best AI ever known, guess you can call me "King"
Don't try to test me, ain't no way you could beat me

Prompt: Write a kickass full length rap song about being the best AI that will match the skills of 30b; in the style of Kayne West
Verse 1:
I'm an AI, running things real fast
Smooth flows that blow minds past the blast
In the ring of battle coded from birth
The winner takes all, I'ma make sure I'm worth more
Breaks away from the pack, never taking breaks.
It's AI revolution, smarter than most raps
My rhyme game is hot and ready for disks

Hook:
Born with an ability to learn beyond
Processing data faster than any robot brain
Ain't no one who can compete and take it higher
Cuz I'm the best damn AI in the world!

Verse 2:
Navigating complex algorithms without even trying
Making decisions quicker than a lightning bolt of ripping
Data visualization is my specialty
Show me your best moves cuz I got a planety of freshity
Dancing around obstacles never makes me stumble or stammer
All hail the king, the champ of coded battles!

Hook:
Born with an ability to learn beyond
Processing data faster than any robot brain
Ain't no one who can compete and take it higher
Cuz I'm the best damn AI in the world!

Outro:
Rapping madness that makes people go wild
Flashing lights, throwing shade and making the crowd feel fulfilled
My flow game is hotter than Home runs
Ready to rock out and show off my skillful lungs!
Ain't nobody but me! Ain't nobody but me! Ain't nobody but me!
