Context: Gemma is a family of free-to-use AI models focused on being small. According to benchmarks, it outperforms Llama 3.

  • єχтяαναgαηтєηzумє · 3 days ago

    Only portions of the code are published while the rest is kept under wraps. Classic corporate America BS, finding a loophole to use a trendy term.

    • Eager Eagle@lemmy.world · 2 days ago (edited)

      Neural network weights are just files, collections of numbers forming matrices; how would a partially open collection of weights be of any use?

      The weights are open:

      $ docker exec -it ollama ollama show gemma:7b
        Model
          arch              gemma
          parameters        9B
          quantization      Q4_0
          context length    8192
          embedding length  3072

        Parameters
          stop              "<start_of_turn>"
          stop              "<end_of_turn>"
          penalize_newline  false
          repeat_penalty    1

        License
          Gemma Terms of Use
          Last modified: February 21, 2024
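
      To make the "just matrices" point concrete: a minimal sketch, assuming the Gemma weights have been downloaded from Hugging Face as a safetensors file (the filename below is a placeholder), that lists every tensor's name, shape, and dtype using the safetensors library.

      # Minimal sketch: inspect model weights as plain tensors.
      # "model-00001-of-00004.safetensors" is a placeholder; point it at whatever shard you downloaded.
      from safetensors import safe_open

      with safe_open("model-00001-of-00004.safetensors", framework="np") as f:
          for name in f.keys():
              tensor = f.get_tensor(name)  # a plain numpy array
              print(name, tensor.shape, tensor.dtype)

      Nothing in the file itself is hidden; any restrictions live in the license terms, not in the artifact.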
      
      • jacksilver@lemmy.world · 2 days ago

        Since there is an acceptable use policy that restricts what you can do with the model, it might be considered only “partially” open.

        Yeah, you can see the weights, but it seems you are limited in what you can do with them. How we’ve gotten to the point where you can protect these random numbers that I’ve shared with you through a UA is beyond me.