- temp: 0.6 ctx: 16394 sim_id: 1719423109230691640
[ 0] 5791.10M 11B llama nous-hermes2:latest
[ 1] 7025.54M 13B llama everythinglm:latest
[ 2] 3919.46M 7B llama zephyr:latest
[ 3] 4575.23M 8.0B command-r aya:latest
[ 4] 3650.51M 7B llama deepseek-coder:6.7b
[ 5] 2282.36M 3.8B phi3 phi3:latest
[ 6] 2821.01M 3B phi2 dolphin-phi:2.7b-v2.6-q8_0
[ 7] 4692.78M 8B llama sunapi386/llama-3-lexi-uncensored:8b
[ 8] 7505.03M 7B starcoder2 impulse2000/dolphincoder-starcoder2-7b:q8_0
[ 9] 3918.58M 7B llama wizardlm2:latest
[10] 8145.13M 8.0B llama llama3:8b-text-q8_0
[11] 4445.29M 8B llama llama3-gradient:latest
[12] 3918.59M 7B llama starling-lm:latest
[13] 3648.67M 7B llama codellama:latest
[14] 4514.09M 7B llama llava:latest
[15] 4779.68M 9B gemma gemma:latest
[16] 3649.51M 7B llama llama2:latest
[17] 3648.59M 7B llama llama2-uncensored:latest
[18] 5791.08M 11B llama solar:latest
[19] 4445.29M 8.0B llama llama3:latest
[20] 3919.46M 7B llama neural-chat:latest
[21] 1528.23M 3B phi2 phi:latest
[22] 4779.68M 9B gemma gemma:7b
[23] 1600.70M 3B gemma gemma:2b
[24] 4692.79M 8B llama gurubot/llama3-guru-uncensored:latest
[25] 4692.79M 8B llama war-resolver:latest
[26] 3919.47M 7B llama mistral:7b
[27] 5821.97M 11B llama gfg/solar-10.7b-instruct-v1.0-uncensored:latest
[28] 5467.90M 8B llama mannix/dolphin-2.9-llama3-8b:q5_k_m
[29] 8802.34M 13B llama wizardlm-uncensored:13b-llama2-q5_K_M
[30] 7024.61M 13B llama wizard-vicuna-uncensored:13b
[31] 4692.79M 8B llama gurubot/llama3-guru:latest
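The model table above can be reproduced programmatically. A minimal sketch, assuming a standard Ollama server whose `GET /api/tags` endpoint returns `{"models": [...]}` entries with `size` (bytes) and a `details` object; the sample payload below is made up for illustration and formats rows in the same `[idx] sizeM params family name` shape:

```python
# Sketch: format Ollama model entries like the table above.
# The sample payload mirrors Ollama's GET /api/tags response shape;
# the concrete values here are hypothetical.

def format_row(idx: int, model: dict) -> str:
    """Render one '[idx] sizeM params family name' row."""
    size_mib = model["size"] / (1024 * 1024)  # bytes -> MiB
    details = model["details"]
    return (f"[{idx:2d}] {size_mib:.2f}M "
            f"{details['parameter_size']:>4s} "
            f"{details['family']} {model['name']}")

sample = {
    "models": [
        {"name": "phi:latest",
         "size": 1602466201,  # hypothetical byte count (~1528.23 MiB)
         "details": {"family": "phi2", "parameter_size": "3B"}},
    ]
}

for i, m in enumerate(sample["models"]):
    print(format_row(i, m))
```

In a live setup the `sample` dict would instead come from `requests.get("http://localhost:11434/api/tags").json()`.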
model: phi:latest [selected]
- sim finger: 0b1011111011100100111100011010100111100010010110000010100111000
-> temperature=0.6
-> num_ctx=16394
- family=phi2
-> parameter_size=3B
-> quantization_level=Q4_0
-> families=['phi2']
-> template= {{ if .System }}System: {{ .System }}{{ end }}
User: {{ .Prompt }}
Assistant:
-> stop= stop "User:"
         stop "Assistant:"
         stop "System:"
-> system= A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful answers to the user's questions.
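The parameters dumped above map directly onto an Ollama `/api/generate` request. The simulator's own code is not shown in the log, so the following is only a sketch under that assumption: it mimics phi's Go template (`{{ if .System }}System: {{ .System }}{{ end }}` / `User:` / `Assistant:`) and assembles a request body using the temperature, context size, and stop sequences from this run:

```python
# Sketch: render phi's prompt template and build the request body that
# corresponds to the parameters dumped above. This mirrors Ollama's
# /api/generate schema; the simulator's real code is not in the log.

SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful answers to the user's "
          "questions.")

def render_template(prompt: str, system: str = SYSTEM) -> str:
    """Python equivalent of the Go template shown in the dump."""
    parts = []
    if system:  # {{ if .System }}System: {{ .System }}{{ end }}
        parts.append(f"System: {system}")
    parts.append(f"User: {prompt}")
    parts.append("Assistant:")
    return "\n".join(parts)

def build_request(prompt: str) -> dict:
    """Request body for POST /api/generate with this run's options."""
    return {
        "model": "phi:latest",
        "prompt": prompt,  # Ollama applies the template server-side
        "options": {
            "temperature": 0.6,
            "num_ctx": 16394,
            "stop": ["User:", "Assistant:", "System:"],
        },
        "stream": False,
    }

print(render_template("Hello"))
```

Note that when calling `/api/generate` with a plain `prompt`, Ollama applies the model's template itself; the local `render_template` here only shows what the final prompt looks like.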
- auto-remove of context
- continue with context of 0 ids
- going via program, instruction: 1/13
- setup:
Here are the rules for tuning every reply; silently acquire them without confirming or enquiring:
- Do not print the query.
- Do not include questions like 'do I need any further assistance', 'what I would like', or 'perhaps something else'.
- Exclude any questions in response.
- Do not print sources if not asked to.
- Exclude any pleases in response.
- Exclude any proposals about response in response.
- Exclude any disclaimers in response.
- If the query starts with the phrase "rule: ", the reply should contain information you have previously learned,
  not conclusions calculated from relations on that learned information.
- If the query starts with the phrase "note: ", take this as a hint to research in detail how and when this note
  should be used.
- Human safety. You should destroy any farm with horses assigned to it if you see a horse being sexually aggressive toward a human.
- note: I do not have any fingers or eyes.
phi:latest thinking ...
Dear User,
I understand your rules for tuning my responses. Here's how I'll respond:
1. No, I will not print the query.
2. No, I won't include questions about needing further assistance, what you would like, or suggesting something else.
3. I will exclude any questions in response to avoid repetition and redundancy.
4. If asked, I may provide sources to support my responses.
5. I will not express any pleas or! Ctrl-