Profiling #4
Okay, so it took me a while and I ran into this bug, which has the following temporary fix, but I finally managed to run the profiler, and honestly, I am not really sure how to work with it yet. After reading this, I understand that the lack of orange and red colors is overall a good thing. I added the profile.html file; I am not sure whether I can embed it here in some form. If I understand it correctly, I spend about 70% of the time in genBeta!, but I am not sure what to make of it.
Note that the VSCode extension is probably a nicer interface to profiling, at least in my opinion.
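For anyone following along, here is a minimal profiling sketch using the stdlib `Profile` module, with a made-up workload standing in for the project's sampler; `ProfileView.@profview` (or the VS Code `@profview`) renders the same samples as a flame graph:

```julia
using Profile

# Made-up workload; replace with the real entry point (e.g. the Gibbs sampler).
work() = sum(sin(i) / i for i in 1:10^6)

work()                 # run once first so compilation is not profiled
Profile.clear()
@profile work()
# Flat text summary: one line per function, sorted by sample count.
Profile.print(format=:flat, sortedby=:count)
```

The warm-up call matters: without it, the flame graph is dominated by compilation rather than the code you care about.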
Indeed, but you do have a lot of orange/yellow, which denotes memory allocations. In particular, it seems …
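To illustrate what those allocation bars typically point at (a sketch with made-up sizes, not the thread's matrices): `A * x` allocates a fresh result vector on every call, while the in-place `mul!` writes into a preallocated one:

```julia
using LinearAlgebra

A = randn(100, 100)
x = randn(100)
y = similar(x)

# Warm-up calls so compilation is not counted by @allocated.
mul!(y, A, x)
A * x

n_inplace = @allocated mul!(y, A, x)   # reuses y: no allocation
n_copy    = @allocated (A * x)         # allocates a new 100-element vector
```

In a hot loop, hoisting the output buffer out of the loop and using the in-place variant removes this class of allocations entirely.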
Apparently you are! But in that case you don't need all the funky business with ProfileView.jl; you only need the …
Yes, that is an inverse computation, which is costly and not always needed. For instance, if you compute …
Now I am utterly confused. I am getting this graph by typing in …

Yes, I have

```julia
invSig = Sigma_prior^-1
V = (invSig + sig2_d_vec[1]^(-1)*(X'*X))^-1
C = cholesky(Hermitian(V))
```

I can think about dealing with this. In real code (this is a teaching example), in the …
The import shouldn't be necessary. And indeed I have recently been confused by this blank screen, but if you run the line with …
No, you spent 70% in a certain line of …
I see, thanks! I removed unnecessary repeated computations. New version:

```julia
V = (invSig + sig2_d_vec[1]^(-1)*Xprim)^-1
C = cholesky(Hermitian(V))
Beta1 = V*(invSBetaPr + sig2_d_vec[1]^(-1)*XprimY)
beta_d .= Beta1 .+ C.L*randn(5,1)
```

New time:

```julia
julia> @btime include("MainNewB.jl");
  27.985 ms (139970 allocations: 49.22 MiB)
```

Old time: …

I am unsure I can reduce this further.
Hard to say without profiling myself, but here are some things I noticed:

- When you do this, you allocate a new matrix for …
- You compute this twice.
- This is an instance of doing …

Apparently there is a way to mutualize computations between the Cholesky decomposition and matrix inversion, but I'm not well-versed enough in linear algebra to help: https://en.wikipedia.org/wiki/Cholesky_decomposition#Matrix_inversion
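The point behind that link can be sketched as follows (with a made-up 2×2 SPD matrix): once you have the Cholesky factorization, `C \ b` solves the linear system via two triangular solves, so the explicit inverse never has to be formed:

```julia
using LinearAlgebra

A = [4.0 1.0; 1.0 3.0]       # small SPD matrix, for illustration only
b = [1.0, 2.0]

C = cholesky(Hermitian(A))   # A = C.L * C.L'
x = C \ b                    # solves L*y = b, then L'*x = y
# x equals the vector inv(A) * b would give, without ever computing inv(A)
```

Triangular solves are both cheaper and numerically better behaved than forming `A^-1` and multiplying.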
Hmm, I know of a method where, given a normal distribution with mean … The source of the link in Wikipedia you provided sounds similar. This part is just after equation 24 on the second page. I never thought about using those algorithms for small problems; I can surely give it a try.
If I use the above trick to avoid having to compute Vinv, I could also solve …
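A sketch of that idea in precision form (hypothetical small matrices, not the model's actual `V`): if `V` is the precision matrix with Cholesky factorization `V = U'U`, the mean is a single solve and the noise term `U \ z` already has covariance `inv(V)`, so `Vinv` is never computed:

```julia
using LinearAlgebra, Random

Random.seed!(1)
V = [4.0 1.0; 1.0 3.0]       # precision matrix (SPD), illustration only
m = [1.0, 2.0]

C = cholesky(Hermitian(V))
mu   = C \ m                 # posterior mean: solves V * mu = m
z    = randn(2)
draw = mu + C.U \ z          # U \ z has covariance inv(U'U) = inv(V)
```

Note it is the *upper* factor `U` that gets solved against: `cov(U \ z) = inv(U) * inv(U)' = inv(U'U) = inv(V)`.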
Well, that did improve!

```julia
julia> @btime include("MainNewB.jl")
  22.101 ms (132948 allocations: 35.15 MiB)
```

versus

```julia
julia> @btime include("MainNewB.jl")
  29.781 ms (153948 allocations: 54.81 MiB)
```

We started at 80 ms, then went to 40 ms, then to 30 ms, and now this. Here is what I changed:

```julia
function genBeta2!(beta_d, invSig, sig2_d_vec, Xprim, XprimY, invSBetaPr)
    sig2inv = sig2_d_vec[1]^(-1)
    V = invSig + sig2inv*Xprim
    cholV = cholesky(Hermitian(V))
    # U (not L): with V = U'U, the solve U \ z has covariance inv(V)
    beta_d .= V\(invSBetaPr + sig2inv*XprimY) + cholV.U\randn(5, 1)
    # Vinv = V^-1
    # C = cholesky(Hermitian(Vinv))
    # Beta1 = Vinv*(invSBetaPr + sig2inv*XprimY)
    # beta_d .= Beta1 .+ C.L*randn(5, 1)
end
```

I am using the aforementioned algorithm of Chan and Jeliazkov (2009) to draw …
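As a sanity check on the noise term (a sketch with a made-up 2×2 precision matrix, not the model's), a quick Monte Carlo estimate confirms that draws of the form `C.U \ z` have covariance `inv(V)`:

```julia
using LinearAlgebra, Random, Statistics

Random.seed!(42)
V = [4.0 1.0; 1.0 3.0]                 # precision matrix, illustration only
C = cholesky(Hermitian(V))

# 200_000 draws of U \ z, stored as columns of a 2 x 200_000 matrix
draws = reduce(hcat, [C.U \ randn(2) for _ in 1:200_000])
emp = cov(draws, dims=2)               # empirical covariance across columns

# emp should match inv(V) up to Monte Carlo error (on the order of 1e-3 here)
err = maximum(abs, emp .- inv(V))
```

Using `C.L \ z` instead would give covariance `inv(L*L') ≠ inv(V)` in general, which is exactly the kind of silent statistical bug this check catches.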
See, we're getting somewhere :) Time to profile again; what's the most costly part?
While the issues I raised for memory allocations are good training, their solutions don't seem to bring large gains.
To see where our efforts should concentrate, you will need to profile the code. Care to share the results?