Giant v2 focuses on scaling and learning by optimizing the Transformer model even further and training it on synthetic data generated by another LLM. (I am sounding like ChatGPT wrote this.)
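The exact synthetic-data setup for Giant v2 isn't spelled out here, but the general idea of using another LLM as a "teacher" can be sketched like this. Everything below is hypothetical: `teacher_generate` is a stand-in for whatever model or API actually produces the text.

```python
# Hypothetical sketch of teacher-generated synthetic data (not the actual Giant v2 pipeline).
def teacher_generate(prompt):
    # Placeholder teacher: in practice this would call a real LLM.
    return f"Answer to: {prompt}"

def build_synthetic_dataset(prompts):
    """Pair each prompt with the teacher's output to form (input, target) training examples."""
    return [(p, teacher_generate(p)) for p in prompts]

data = build_synthetic_dataset(["What is attention?", "Explain tokenization."])
print(data[0])
```

The resulting pairs can then be fed to the student model as ordinary supervised training examples.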
Giant v2 won't introduce any new architecture; the focus is on better dataset preparation, cleaning, and better training techniques.
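What "cleaning" means for Giant v2 isn't detailed here, but a minimal sketch of a typical cleaning pass (length filtering plus exact-match deduplication) might look like this. The function and thresholds are assumptions for illustration, not the project's actual pipeline.

```python
# Hypothetical dataset-cleaning sketch: drop very short samples and exact duplicates.
def clean_dataset(samples):
    """Filter and deduplicate raw text samples before training."""
    seen = set()
    cleaned = []
    for text in samples:
        text = text.strip()
        # Drop empty or very short samples (threshold chosen arbitrarily).
        if len(text) < 20:
            continue
        # Case-insensitive exact-match deduplication.
        key = text.lower()
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(text)
    return cleaned

raw = [
    "Short.",
    "The Transformer processes tokens in parallel using self-attention.",
    "the transformer processes tokens in parallel using self-attention.",
]
print(clean_dataset(raw))  # only one copy of the long sample survives
```

Real pipelines usually add fuzzy deduplication and quality scoring on top of this, but the shape of the loop stays the same.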
I am still working on Giant v2; there is no release date yet. Only after it is done can I start filming part 2 of the Giant series - sorry!
The next architectural changes will come with Giant v3 or Axonix.