So would the Granite models count as “open source”? They do publish the training data they used.
It seems they’ve outlined the datasets they used in Annex B of their paper. I haven’t checked whether the list is exhaustive, or whether the training code and the scripts to prepare the data are published… If they are, I’d say this is indeed a proper open-source model. And the weights are licensed under an Apache license.