Address slowness resulting from size of MODE data #67
To document the slowness of MySQL queries for MODE data, I plotted our most taxing test case: a month-long reflectivity dieoff. These are the parameters:
[image: Screen Shot 2022-03-25 at 12 05 22 PM] <https://user-images.githubusercontent.com/29582181/160679982-7b821b42-2f86-4226-808e-42170fd9b5ae.png>
This plot requires two queries (to avoid doing a horrendous table join):
```sql
select h.fcst_lead as fcst_lead,
       count(distinct unix_timestamp(h.fcst_valid)) as N_times,
       min(unix_timestamp(h.fcst_valid)) as min_secs,
       max(unix_timestamp(h.fcst_valid)) as max_secs,
       avg(ld2.area) as area,
       group_concat(distinct ld2.object_id, ';', h.mode_header_id, ';', ld2.area, ';',
                    ld2.intensity_nn, ';', ld2.centroid_lat, ';', ld2.centroid_lon, ';',
                    unix_timestamp(h.fcst_valid), ';', h.fcst_lev
                    order by unix_timestamp(h.fcst_valid), h.fcst_lev) as sub_data2
from mv_gsl_mode_retros.mode_header h,
     mv_gsl_mode_retros.mode_obj_single ld2
where 1=1
  and unix_timestamp(h.fcst_valid) >= '1588483800'
  and unix_timestamp(h.fcst_valid) <= '1591023600'
  and h.model = 'HRRRv4'
  and h.fcst_var = 'REFC'
  and h.fcst_lev IN ('L0','L0=')
  and h.descr IN ('ECONUS')
  and ld2.simple_flag = 1
  and h.mode_header_id = ld2.mode_header_id
group by fcst_lead
order by fcst_lead;
```

```sql
select h.fcst_lead as fcst_lead,
       count(distinct unix_timestamp(h.fcst_valid)) as N_times,
       min(unix_timestamp(h.fcst_valid)) as min_secs,
       max(unix_timestamp(h.fcst_valid)) as max_secs,
       avg(ld.interest) as interest,
       group_concat(distinct ld.interest, ';', ld.object_id, ';', h.mode_header_id, ';',
                    ld.centroid_dist, ';', unix_timestamp(h.fcst_valid), ';', h.fcst_lev
                    order by unix_timestamp(h.fcst_valid), h.fcst_lev) as sub_data
from mv_gsl_mode_retros.mode_header h,
     mv_gsl_mode_retros.mode_obj_pair ld
where 1=1
  and unix_timestamp(h.fcst_valid) >= '1588483800'
  and unix_timestamp(h.fcst_valid) <= '1591023600'
  and h.model = 'HRRRv4'
  and h.fcst_var = 'REFC'
  and h.fcst_lev IN ('L0','L0=')
  and h.descr IN ('ECONUS')
  and ld.simple_flag = 1
  and h.mode_header_id = ld.mode_header_id
group by fcst_lead
order by fcst_lead;
```
For this example on metv-gsd.gsd.esrl.noaa.gov, the combined query
execution time was 6:23 (the vast majority of this was spent on the second
query), and the time to execute the METexpress statistical code on the
results was 2:49.
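As a side note on where the database time might be going: the date filter wraps h.fcst_valid in unix_timestamp(), which generally prevents MySQL from using an ordinary index on fcst_valid (if one exists), so large scans of mode_header and the object tables are plausible. Below is a minimal sketch of checking the chosen plan with EXPLAIN from Python; the query text is abbreviated and the connection credentials are placeholders, not values from this thread.

```python
# Hedged sketch: ask MySQL for the execution plan of the slow object-pair
# query. The query is abbreviated and the credentials are placeholders.
import pymysql

EXPLAIN_PAIR_QUERY = """
explain
select h.fcst_lead, count(distinct unix_timestamp(h.fcst_valid)) as N_times
from mv_gsl_mode_retros.mode_header h,
     mv_gsl_mode_retros.mode_obj_pair ld
where unix_timestamp(h.fcst_valid) >= '1588483800'
  and unix_timestamp(h.fcst_valid) <= '1591023600'
  and h.model = 'HRRRv4'
  and h.fcst_var = 'REFC'
  and h.fcst_lev in ('L0', 'L0=')
  and h.descr in ('ECONUS')
  and ld.simple_flag = 1
  and h.mode_header_id = ld.mode_header_id
group by h.fcst_lead
"""

conn = pymysql.connect(host="metv-gsd.gsd.esrl.noaa.gov", user="readonly_user",
                       password="********", database="mv_gsl_mode_retros")
try:
    with conn.cursor() as cur:
        cur.execute(EXPLAIN_PAIR_QUERY)
        for row in cur.fetchall():   # one row per table in the join
            print(row)               # check the 'type', 'key', and 'rows' columns
finally:
    conn.close()
```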
|
Thank you so much for documenting this. Just to be clear, I assume the execution time of 6:23 was 6 minutes 23 seconds?
|
Yes! Both of those times are in minutes.
|
Today's MySQL speed test returned a query time of 7:16 and a statistic calculation time of 2:36. |
One interesting note: while the post-query statistical analysis on localhost takes about two and a half minutes, it seems to take 5-6 times as long on mats-docker-int (I'll try to get a specific number later). Are our containers underpowered? This seems like something we should absolutely fix if we can. |
Hi Molly,
Could you elaborate a little more on this so that I could try to replicate
it on my laptop? I might also try it within different containers. I wonder
what the limiting factor is. It could be some sort of system resource, like not enough memory or a slow disk configuration, or it could be network issues, or it could be the database. Was it the same database both ways?
randy
|
Okay, on MATS/METexpress, if you click "Data Lineage" from the graph page, there is a field under "basis" called "data retrieval (query) time". In METexpress, this field represents the TOTAL time for TWO things to occur: 1) for the MySQL database itself to return the results of the query, and 2) for python_query_util to parse the returned data into the fields expected by the MATScommon routines and calculate the statistic. Both of these are considered "query time". I've been trying this week to quantify the time that both 1) and 2) take in MET Objects.
Speed results for 1): When using the query given previously with the MySQL database on metv-gsd, both localhost and the containers deployed on mats-docker-int take around 6-7 minutes to get their data back from the database if it is not cached, and around 1 minute if it is cached. On localhost I can capture this by putting print statements in python_query_util, while on mats-docker-int I have to have a terminal window open to metv-gsd and spam "show processlist;" to see how long the query takes. However, it seems to be about the same for both.
Speed results for 2): On localhost, once the data comes back from the database, it takes about two and a half minutes to run through all of Jeff Duda's statistical routines and exit python_query_util without any errors. On mats-docker-int, it takes 8-10 minutes, in the absence of caching. This is a substantial difference and leads me to believe that the containers don't have the resources they need. The speed for this component is determined by taking the "data retrieval (query) time" from the Data Lineage and subtracting the time needed for the query itself.
All of these times are for dieoffs; the other plot types are much faster. |
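For reference, a minimal sketch of how the two components described in the comment above could be timed separately inside a query routine. The function and argument names here are hypothetical stand-ins, not the actual python_query_util API.

```python
# Hedged sketch: time the database round trip (phase 1) separately from the
# statistics/formatting work (phase 2). The calc_stats argument stands in for
# the real statistical routines in python_query_util; it is not the actual API.
import time

def timed_query_and_stats(cursor, sql, calc_stats):
    t0 = time.perf_counter()
    cursor.execute(sql)        # phase 1: MySQL executes the query...
    rows = cursor.fetchall()   # ...and the rows come back over the network
    t1 = time.perf_counter()

    stats = calc_stats(rows)   # phase 2: parse rows and compute the statistic
    t2 = time.perf_counter()

    print(f"phase 1 (database query): {t1 - t0:7.1f} s")
    print(f"phase 2 (statistics):     {t2 - t1:7.1f} s")
    return stats
```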
So the extra time that you are seeing on mats-docker-int is not query time
but is the time it takes to do the data processing, and the code is the
same in both cases?
randy
|
Yes, but I want to be very specific that it is data processing in the query routines, and as such MATS/METexpress considers it to be "query time". I have not looked at the rest of the data processing, such as diff curves, setting plot options, etc. The query code is the same in both cases but it takes several times as long when run in a container on our MATS servers. |
I am not surprised if number crunching is faster on your laptop than on the VM. You have a very fast laptop with a solid-state disk, while the VM has a network-mounted disk and has to share its processors with other containers. It is certainly possible that the VM is running out of memory, and we can check that with "top". We could also run a container on the laptop and measure the processing that way, and we probably should also run them on the rancher server to see how that platform measures up. To start with, can we run the query on mats-docker-int and watch top? We want to watch the load, cpu, and memory fields.
randy
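For reference, a minimal sketch of one way to log the same numbers programmatically while a long plot request runs, rather than reading top by eye. It assumes the psutil package is available on the host; the sampling interval and duration are arbitrary choices.

```python
# Hedged sketch: sample load average, CPU, and memory every few seconds while
# the python number-crunching process runs. Requires psutil; interval and
# duration are arbitrary.
import os
import time
import psutil

def sample_system(duration_s=600, interval_s=5):
    end = time.time() + duration_s
    while time.time() < end:
        load1, load5, load15 = os.getloadavg()
        cpu = psutil.cpu_percent(interval=interval_s)  # % busy over the interval
        mem = psutil.virtual_memory()
        swap = psutil.swap_memory()
        print(f"load1={load1:5.2f} cpu={cpu:5.1f}% "
              f"mem={mem.percent:4.1f}% swap={swap.percent:4.1f}%")

if __name__ == "__main__":
    sample_system()
```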
|
Sure, I can check that. Is there any way to beef up the VMs then? They are substantially slower. |
It depends on why they are slower. If they are running out of memory and that is causing disk swapping, then certainly we can fix that. If somehow Python is having to use the disk for these calculations, then probably not. If the load is too high because we are running too many processes on the VM, then we can help that too.
Perhaps when you want to run the query I could also be watching top?
randy
|
If you want to hop on now I'll do it. mats-docker-int. |
I'm there now
|
Okay. The query should be cached so it'll take about a minute, and then the query routines should take longer. Plotting now. |
Nope, MATS had cached that plot itself. One moment. |
Right, plotting now. |
Database is working on query. |
Still on mysql component. |
I'll let you know when it leaves the processlist and we move to phase 2. |
Phase 2! |
CPU is at 100% and memory is at 10-15%. |
www-data is maxing out the cpu
|
Yes |
I'm sure you're both on top of this. However, I figure I should mention that we should make sure we're using the same database for the comparison tests. 😅 |
If you use git pull to update the mats-settings directory and use that directory for the settings, then it should be the correct database.
randy
|
I don't think it totally matters, because the difference between databases is on the order of minutes, while the localhost vs container discrepancy we're trying to test is on the order of hours, but I'm using metv-gsd, same as we had for mats-docker-int the other day. I believe that the localhost settings currently have model-vxtest. |
It's even worse if you change the variable to REFC, because that has 3 times the forecast leads. And never mind, actually, each container should cache separately, so there shouldn't be interference. Plot away! |
This is what I am getting for the default dieoff plot with the variable REFC:
```
"data retrieval (query) time": {"begin": "2022-04-06T19:52:49+00:00", "finish": "2022-04-06T19:58:50+00:00", "duration": "361.511 seconds", "recordCount": 1},
"total retrieval and processing time for curve set": {"begin": "2022-04-06T19:52:49+00:00", "finish": "2022-04-06T19:58:51+00:00", "duration": "361.946 seconds"},
"post data retrieval (query) process time": {"begin": "2022-04-06T19:58:50+00:00", "finish": "2022-04-06T19:58:51+00:00", "duration": "0.418 seconds"}
```
|
FYI, the slow thing we're trying to test (getting the statistics calculated and the data formatted properly) is included in "data retrieval (query) time". It's not considered post-query because it's performed by the query routines. |
I have this in my settings. Is this correct?
"databases": [
{
"role": "sums_data",
"status": "active",
"host": "
metviewer-dev-2-cluster.cluster-c0bl5kb6fffo.us-east-1.rds.amazonaws.com",
"port": "3306",
"user": "metexpress",
"password": "...!",
"database": "metexpress_metadata",
"connectionLimit": 4
}
],
|
That's the AWS machine! Where did that come from?? |
The fastest database of them all. |
Okay, I've made two dieoffs at this point, and while they did take slightly longer than using meteor run for localhost:3000, they were nowhere near the hours we saw on mats-docker-int. |
From git. I did a pull and that is what I got. I am not showing any
differences if I do git diff.
randy
|
It was like an extra minute for each compared to localhost. Totally acceptable. |
What is the exact path you are using for your settings file? |
Which implies perhaps that it is the docker installation on mats-docker-int?
randy
|
Yes, it definitely does. Not just mats-docker-int, though, all of the mats-docker-* |
```
pierce-lt:~ randy.pierce$ cat docker-compose.yml
version: "3.9"
services:
  cb-ceiling:
    image: ghcr.io/dtcenter/metexpress/development/met-object:development
    ports:
      - "3001:9000"
    links:
      - mongo
    volumes:
      - /Users/randy.pierce/mats-settings/configurations/dev/settings/met-object/settings.json:/usr/app/settings/met-object/settings.json:ro
  mongo:
    image: mongo
pierce-lt:~ randy.pierce$
```
|
At this point we have to try this on the GSL rancher.
randy
|
Can you do a git fetch on the mats-settings repo and see if anything changes? That is not what I pushed last week. |
And indeed about rancher! |
I had already done a git pull and I did a fetch and pull again and there
aren't any differences.
randy
|
What branch are you on? This is bizarre. |
I am on the development branch.
randy
|
development
|
You'll have to screen share at the meeting and show me. I am quite confused. Is there some sort of weird symbolic link somewhere? |
perhaps, I'll show you at the meeting. I can clone a new one, I suppose.
randy
|
I did a new clone and it still shows this (I blanked out the password)...
```
{
  "private": {
    "databases": [
      {
        "role": "sums_data",
        "status": "active",
        "host": "metviewer-dev-2-cluster.cluster-c0bl5kb6fffo.us-east-1.rds.amazonaws.com",
        "port": "3306",
        "user": "metexpress",
        "password": "...!",
        "database": "metexpress_metadata",
        "connectionLimit": 4
      }
    ],
```
|
Today in our meeting, Randy, Ian, and I ran the same met-object container on Docker Desktop, mats-docker-int, and model-vxtest. On both Docker Desktop and model-vxtest, which are not VMs, the met-object container returned our sample MODE dieoff plot in 7-8 minutes, on par with localhost's 6-7 minutes. This means that there is nothing wrong with the container itself, and Alpine Linux is not the culprit. However, on mats-docker-int, which is a VM, the met-object container ran for an hour and still had not returned a plot. This implies that there is something about the VM itself that is causing our slowness. |
In addition, we carefully monitored memory usage, load, and cpu usage for
all the machines. These numbers were always similar. The cpu usage does go
very high (100%) for each python number crunching process, but it does that
for every machine that we tested. The difference appears to be that 100% on that virtual machine is nowhere near 100% on any of the other machines. We
probably need someone from IT to help us understand what is happening to
the VM when those python processes run. We also took the opportunity to
verify that if two or more clients ask for one of those long running plots
they each get their own python process that runs on a different core, which
is as designed, but it was nice to see that happening.
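For reference, a minimal sketch of one way to spot-check that behavior (each long-running plot request getting its own Python process on its own core). It assumes psutil is available on the host, and Process.cpu_num() is Linux-only.

```python
# Hedged sketch: list running python processes with the core each one is
# currently scheduled on and its CPU usage, to confirm that concurrent plot
# requests get separate processes. cpu_num() is Linux-only; requires psutil.
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if "python" in name:
        try:
            print(proc.info["pid"], proc.info["name"],
                  f"core={proc.cpu_num()}",
                  f"cpu={proc.cpu_percent(interval=1.0):5.1f}%")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
```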
|
Whatever works. I can see if I can put together a docker-compose.yml in the meantime. |
You can just copy the one from mats-docker-int to get started. You need to set up the settings directory to match it or modify the compose file to point to your settings.
|
Heh, apologies to anyone following this repository! |
Check out this link too: https://news.ycombinator.com/item?id=27379482
|
This means I have to finally install rancher desktop, doesn't it :) |
No, not at all. Just Docker Desktop.
|
Needs to be addressed by changing MySQL schema. |
The MODE output from MET is quite large, and may cause problems if we run it in realtime and try to store all of it in one database. We need to discuss solutions for what to do about this.