Current workflow for Python --> CPU/GPU/TPU conversion (including MHLO/HLO MLIR/XLA) #12617
Unanswered
che-shr-cat asked this question in General
            Replies: 0 comments
  
Hello!
Can you please help me understand the current workflow for producing compiled code?
As I understand it, at the beginning of the year JAX switched to using MLIR, emitting MHLO instead of HLO for XLA.
Do I correctly understand that the current workflow is the following:
Python -- (by JAX) --> jaxpr -- (by JAX) --> MHLO -- (by MLIR) --> optimized MHLO -- (by JAX?) --> HLO -- (by XLA) --> optimized HLO -- (by XLA) --> native code for CPU/GPU/TPU (for CPU/GPU additionally using LLVM)
Is it correct or not? Is there any description of this process or any discussions/talks on it?
How exactly does JAX use MLIR? The documentation still describes XLA but does not mention MLIR.
Thank you in advance!
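For context, here is roughly how one can inspect the first few of these stages from Python using JAX's ahead-of-time lowering API (`jax.make_jaxpr` and `jax.jit(...).lower(...)`). This is only a sketch for introspection, not a description of the internals; depending on the JAX version, the lowered MLIR may be printed in the MHLO or StableHLO dialect.

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * 2.0

x = jnp.ones((4,))

# Stage 1: trace the Python function to a jaxpr
print(jax.make_jaxpr(f)(x))

# Stage 2: lower to MLIR (MHLO/StableHLO, dialect depends on JAX version)
lowered = jax.jit(f).lower(x)
print(lowered.as_text())

# Stage 3: compile with XLA and inspect the optimized output
compiled = lowered.compile()
print(compiled.as_text())
```

The printed text at each stage makes it possible to see where the jaxpr-to-MLIR and MLIR-to-XLA handoffs happen, even without documentation of the pipeline itself.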