
Simulator improvements #74

Open
shankari opened this issue Oct 15, 2024 · 138 comments

@shankari
Collaborator

  • roll forward to master
  • ensure that we can see the energy fluctuations
  • show ISO + OCPP messages in the UI (either on the side or in the debug screen)
@the-bay-kay

the-bay-kay commented Oct 21, 2024

Noticed a regression in the Node-RED behavior when updating to the latest version of the flows:

  • When choosing a simulation method, there is a set of commands used to start, stop, pause, and resume the simulation (see this thread for some basic context...)
    image

  • Within the current version of EVerest's ISO15118-2+OCPP2.0.1 Demo, we only have commands to start and stop the simulation under the "Plug & Charge" method: if we attempt to pause, we receive the following error and crash:
    image

The command sequence has changed enough between versions that we cannot just copy-and-paste the old sequence, so let's splice in our modified dt/eamount cmd and add back in the pause / resume functionality...

EDIT: Well, splicing in the commands was easy enough! However, the pause/resume functionality is not the same as in the non-P&C flow. We pause the session OK, but when attempting to resume, we run into this certificate error...

image

In the interest of time (we want the demo rolled forward ASAP), I'll leave this as a known regression and we can inspect it after the dust has settled.

@the-bay-kay

the-bay-kay commented Oct 21, 2024

Let's make a checklist of areas I need to update for the roll-forward:

  • Node-RED:
    • Update flows to include changes up to Renegotiation *
      • *Fix the regression described above
    • Update Dockerfile to be at parity with renegotiation demo
  • Manager
    • Modify the Dockerfile to build off of 2024.9.0-rc2
      • Fix Database Incompatibilities Described Below
  • Once updated & running:
    • Test DepartureTime & EAmount alone
      • If working, cool! If not, will need to make some updates
    • Test Grid Clamps
      • ditto
    • Add Powercurve generation module
    • Modify simulator.py to use the powercurve generation
  • If necessary, update the following modules:
    • Compiled Modules
      • Update EvseManager.cpp, OCPP201.cpp
        • Double check there are no other files to be changed on the C++ side of things (e.g., do we adjust iso_server.cpp?)
    • Simulator:
      • Update PyJosEv's /evcc/ files

@shankari
Collaborator Author

shankari commented Oct 21, 2024

I think this is the best approach -- since I'm unsure which release the main of everest-demo is on, we want to ensure we're building off of the September Release. Please correct me if there is a better way to live update everest-core while working on the demo! (e.g., should we just patch after pulling?)

@the-bay-kay I have been patching after pulling - e.g.
https://github.com/US-JOET/everest-demo/blob/879548aafdf9ff2366de45f1b77cd25d5667ebb2/demo-iso15118-2-ac-plus-ocpp.sh#L354

I would start with that, and switch to having manager/Dockerfile build off of a fork if the number of patches gets too large.

@the-bay-kay

Switching to 2024.9.0-rc2, we encounter the following config error...
image

It seems Shankari ran into a similar issue in this previous issue when building on the uMWC - presumably, I'll need to copy over the config files from the last known working demo (e.g., 2024.3.0) -- I need to run to some evening appointments, but will investigate further afterward...

@shankari
Collaborator Author

presumably, I'll need to copy over the config files from the last known working demo (e.g., 2024.3.0)

That is almost certainly not the correct approach to take. I am not sure where you are getting the config from (you haven't indicated what you did to accomplish "switching to 2024.9.0-rc2"). However, if there is a mismatch in modules, it is almost certainly due to an old config, referring to an old module, being copied over, and the module being renamed in the current release.

In that case, the old config is the problem and copying it over won't fix anything

@the-bay-kay

...you haven't indicated what you did to accomplish "switching to 2024.9.0-rc2"

Apologies for the lack of clarity -- I've switched the version in manager/Dockerfile like I described in the checklist above (so we pull & build 2024.9.0-rc2). The config conflict makes sense. Let me read more closely through the Dockerfile and corresponding config.jsons, to see if I missed any versions there. Likewise, since the error message explicitly mentions that the manifest.yaml of JsCarSimulator is missing, I'll see what has changed in that module between 2024.3.0 and 2024.9.0-rc2.

@the-bay-kay

the-bay-kay commented Oct 22, 2024

Let's trace back and see where this fails:

  • We build using demo-iso15118-2-ac-plus-ocpp.sh, modified to point to manager/Dockerfile: this builds without error
  • Within Docker Desktop's exec, we run sh /ext/source/build/run-scripts/run-sil-ocpp201-pnc.sh, which then fails with the error described above
  • Looking at the contents of run-sil-ocpp201-pnc.sh...
    LD_LIBRARY_PATH=/ext/source/build/dist/lib:$LD_LIBRARY_PATH \
    PATH=/ext/source/build/dist/bin:$PATH \
    manager \
      --prefix /ext/source/build/dist \
      --conf /ext/source/config/config-sil-ocpp201-pnc.yaml \
      \
      $@

We launch using the config /ext/source/config/config-sil-ocpp201-pnc.yaml. So, looking at that...
image

Aha. If these release notes are to be believed, it seems that JsCarSimulator was replaced with JsEvManager. We're copying over an old config from everest-demo that does not match the one in core linked above. So, let's copy over the updated configs...

@the-bay-kay

the-bay-kay commented Oct 22, 2024

Success -- updating the config got us past the manifest loading. It seems we have to update the OCPP database file as well -- we reach the following fail state:

Fail to Boot: Database out of date
2024-10-22 16:53:27.114732 [INFO] ocpp:OCPP201     :: Established connection to database: "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"
2024-10-22 16:53:27.115098 [ERRO] ocpp:OCPP201    void ocpp::v201::InitDeviceModelDb::execute_init_sql(bool) :: Database does not support migrations yet, please update the database.
terminate called after throwing an instance of 'boost::wrapexcept<boost::exception_detail::error_info_injector<ocpp::v201::InitDeviceModelDbError> >'
  what():  Database does not support migrations yet, please update the database.

We create device_model_storage.db when we run...

COPY device_model_storage_maeve_sp1.db ./dist/share/everest/modules/OCPP201/device_model_storage.db

... So let's look for an updated database file and slot that in. At a glance, the only file with the same name as device_model_storage_maeve_sp1.db I can find in the EVerest organization is the one described above in the demo repository -- so, let's widen our search to (i) the MaEVe repo, and (ii) other .db files.

@the-bay-kay

the-bay-kay commented Oct 22, 2024

With no obvious lead for a replacement db file, let's trace back:

  • This error was initially added in this commit, for PR 681.
  • Looking at the EVerest Zulip, it seems I'm not the first person to be confused by the new OCPP database system. That thread points me...
  • Here, to the documentation for device initialization via the new configurations.

If I understand the documentation correctly, the device_model_storage_maeve_sp1.db is a custom database file that is not up to date. So, two questions:

  • Is there an updated database file that I should be using that I have not found yet?
  • If not, how should I go about creating a new version of this file?

@shankari , if you have any insight on working with the updated database schema, that would be greatly appreciated : )

I'll continue to read through the docs to see if I missed anything, and get a better understanding of the initialization process.

EDIT: Under the "to-do" for OCPP 2.0.1 integration, it says to use the provided database file... but I haven't been able to find this...

@shankari
Collaborator Author

@the-bay-kay https://github.com/EVerest/everest-demo/issues?q=is%3Aissue+sqlite+is%3Aclosed

@the-bay-kay

the-bay-kay commented Oct 22, 2024

My database experience has been limited almost entirely to MongoDB and NoSQL systems, so I am approaching this with a beginner's eye. With that said, let's learn along the way:

  • Our issue is that the database does not support migration. Database migration was added relatively recently to EVerest, and allows users to update their DBs without destroying existing data.

  • Question: How do we enable migration?

    • Well, reading through the docs linked above, they say...

    Old databases need to be removed so a new database can be created using the migrations. This is to make sure that there is exact control over the schema of the database and no remains are present.

    Let's put a pin in that for now.

  • The error is thrown on this line, when get_user_version() == 0. We can confirm this is the case for our database file by running...

    sqlite> PRAGMA user_version;
    0
  • Going back to the docs, we can learn more about the purpose of the user version:

    ...The version numbers of the files will be used together with the database's user_version field to determine which migrations files to run to get to the target version.

Because our database was created before the new initialization process, it will not follow the same "version history" described within the design considerations. It seems the only way to move forward is to (as mentioned above) create a brand new database, using the new schema and internal database data (e.g., user_version >= 1). Is there really no way we can retrofit an old database?? I'll look into creating a new database with the init file now (a quick way to check a file's version is sketched below)...
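
(For reference, a minimal sketch of that version check in plain Python -- the path is the one from manager/Dockerfile and is an assumption that may need adjusting:)

import sqlite3

# Assumed location of the device model DB inside the container.
DB_PATH = "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"

with sqlite3.connect(DB_PATH) as conn:
    (user_version,) = conn.execute("PRAGMA user_version").fetchone()

if user_version == 0:
    # Files created before migration support was added report 0, which is
    # exactly the condition that makes InitDeviceModelDb throw above.
    print("user_version is 0: this database predates the migration scheme")
else:
    print(f"user_version is {user_version}: migrations can be applied on top")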


EDIT: Let's read through this... This seems to detail the process of updating components using a custom .json file. This seems similar to what we are doing now, but not identical (e.g., component config should be in component_config/custom, which we do not touch). I think this is good to know about, but not the "update" process we need.

@shankari
Collaborator Author

Please see the way in which I created this database earlier

@the-bay-kay

the-bay-kay commented Oct 22, 2024

Please see the way in which I created this database earlier

To confirm, do I need to re-create the MaEVe database? Do you have a specific thread I can reference? Looking at the commit history of everest-demo, the method for creating OCPP 201 Device Model Databases has changed since the maeve files were updated 7 months ago (we now use a build target instead of the Python script). I wasn't able to find a specific build process in the issues you linked above, but I may have missed something -- I'll read through again. Looking at this PR, I can see that the Citrine demos were added <4 months ago... Let me know if I'm missing something, otherwise I'll try re-creating the databases to support migration.

@shankari
Collaborator Author

This is not the MaEVe database - it is the EVerest database.
MaEVe is the CSMS layer and should not affect the running of the station software.
I put in the link to find the issues where I used sqlite to edit the database file.

You should follow those instructions to copy and edit the file properly.

@the-bay-kay

the-bay-kay commented Oct 23, 2024

This is not the MaEVe database - it is the EVerest database.

Right -- but the file we copy over is called device_model_storage_maeve_sp1.db. Referring to it as "maeve database" may have been a confusing shorthand on my part.

I put in the link to find the issues where I used sqlite to edit the database file.

If you're referring to this thread, resetting the host as described did not fix the issue. Nor is it an issue of an incorrect URL. Around release 2024.7.0, the database was substantially changed. Relevant to our work is the adoption of a migration strategy, which relies on table parameters our database does not have (hence the "user_version()" error we receive: the solution isn't as simple as bumping this value up to 1).

All of the demo repository's databases were created around release 2024.3.0 (i.e., the version the demo is based off of in manager/Dockerfile): any database created before the refactor is not compatible with the current schema. Please see this Zulip thread where I got more clarification on this from the EVerest Cloud Communication group. The community suggests re-creating the database to ensure that it is compatible with the latest schema.

@shankari
Collaborator Author

@the-bay-kay you are working on upgrading the SIL. So the steps to take the SIL and run it on the uMWC are not relevant to you. That's why I suggested that you search for sqlite in general. You could also see when the lines to add the custom DB were added to the Dockerfile.

It was added in #19
which referred to #25
which shows the edits to the sqlite database, including copying it back and forth

The community suggests re-creating the database to ensure that it is compatible with the latest schema.

I don't disagree. The point I am trying to make is that the database checked into the codebase was created by editing the database that was created by EVerest automatically at startup.

After the correctly formatted database is created automatically at startup

You should follow those instructions to copy and edit the file properly.

@the-bay-kay

So, in order to re-create the database, we need to build EVerest without inserting a custom database. Let's comment out the copies:

File Changes

manager/Dockerfile

# Copy over the custom config *after* compilation and installation
# COPY config-docker.json ./dist/share/everest/modules/OCPP/config-docker.json
# COPY config.json ./dist/share/everest/modules/OCPP201/config.json
# COPY device_model_storage_maeve_sp1.db ./dist/share/everest/modules/OCPP201/device_model_storage.db

COPY run-test.sh /ext/source/tests/run-test.sh

demo-iso15118-2-ac-plus-ocpp.sh

  elif [[ "$DEMO_VERSION" =~ sp3 ]]; then
    echo "Copying device DB, configured to SecurityProfile: 3"
    # docker cp manager/device_model_storage_maeve_sp3.db \
    #   everest-ac-demo-manager-1:/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db
  fi
When we run like this, we're successfully generating the database!

image

...but, as expected, fail to connect.

image

Let's compare the old file and the new template, and get this one up to speed...
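
(For the comparison, here is a rough sketch in Python -- the two paths are assumptions for wherever the old custom DB and the freshly generated one actually live:)

import sqlite3

OLD_DB = "manager/device_model_storage_maeve_sp1.db"  # old custom file from the demo repo
NEW_DB = "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"  # freshly generated

def schema(path):
    # Map each table/index name to its CREATE statement.
    with sqlite3.connect(path) as conn:
        rows = conn.execute(
            "SELECT name, sql FROM sqlite_master WHERE sql IS NOT NULL ORDER BY name"
        ).fetchall()
    return dict(rows)

old, new = schema(OLD_DB), schema(NEW_DB)
for name in sorted(set(old) | set(new)):
    if name not in old:
        print(f"only in new DB: {name}")
    elif name not in new:
        print(f"only in old DB: {name}")
    elif old[name] != new[name]:
        print(f"definition differs: {name}")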

@the-bay-kay

the-bay-kay commented Oct 24, 2024

As described above, the current plan is to take the .db file initialized by OCPP201, and modify that to make sure we connect properly to MaEVe. When we run a fresh install of EVerest, we reach the following connection error...

2024-10-24 03:43:04.345324 [ERRO] ocpp:OCPP201    void ocpp::WebsocketTlsTPM::on_conn_fail() :: OCPP client connection to server failed
2024-10-24 03:43:04.345337 [INFO] ocpp:OCPP201     :: Connect failed with state: 3 Timeouted: false
2024-10-24 03:43:04.345421 [INFO] ocpp:OCPP201     :: Reconnecting in: 3000ms, attempt: 1
2024-10-24 03:43:04.504193 [INFO] ocpp:OCPP201     :: Security Event in OCPP occured: StartupOfTheDevice
2024-10-24 03:43:07.347740 [INFO] ocpp:OCPP201     :: Connecting to uri: ws://localhost:9000/cp001 with security-profile 1
2024-10-24 03:43:07.348077 [INFO] ocpp:OCPP201     :: Using network iface: 
2024-10-24 03:43:07.380724 [INFO] ocpp:OCPP201     :: LWS connect with info port: [9000] address: [localhost] path: [/cp001] protocol: [ocpp2.0.1]
2024-10-24 03:43:07.381002 [ERRO] ocpp:OCPP201    int ocpp::WebsocketTlsTPM::process_callback(void*, int, void*, void*, size_t) :: CLIENT_CONNECTION_ERROR: conn fail: 111

Turns out the URL work mentioned above was relevant -- consider my hat eaten! So, looking at the newly generated database file...

sqlite3 /ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db "SELECT * FROM VARIABLE_ATTRIBUTE" | more
...
130|128|2|1|0|0|default|[{"configurationSlot": 1, "connectionData": {"messageTimeout": 30, "ocppCsmsUrl": "ws://localhost:9000", "ocppInterface": "Wired0", "ocppTransport": "JSON", "ocppVersion": "OCPP20", "securityProfile": 1}}]

... We see that we are pointing to the wrong URL. So, let's update this with the following statement, and confirm it was set correctly...

UPDATE VARIABLE_ATTRIBUTE
  SET "VALUE" = '[{"configurationSlot":1,"connectionData":{"messageTimeout":30,"ocppCsmsUrl":"ws://host.docker.internal/ws/cp001","ocppInterface":"Wired0","ocppTransport":"JSON","ocppVersion":"OCPP20","securityProfile":1}}]'
  WHERE id=130;

Let's confirm we've made the change...

sqlite3 /ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db "SELECT * FROM VARIABLE_ATTRIBUTE WHERE id=130" 
...
130|128|2|1|0|0|default|[{"configurationSlot":1,"connectionData":{"messageTimeout":30,"ocppCsmsUrl":"wss://host.docker.internal/ws/cp001","ocppInterface":"Wired0","ocppTransport":"JSON","ocppVersion":"OCPP20","securityProfile":1}}]

Cool! So this should act as a good jumping-off place. Using the database file is more complicated than simply running with these changes (e.g., this is overwritten upon a new initialization, since the "custom database" spot is empty). So, when I've got a fresh set of eyes tomorrow morning, let's take a look at how to use this new modified db as a custom file...
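
(For reference, the URL tweak above can also be scripted so it is easy to re-apply after each rebuild -- a sketch only, assuming the same file path and the same row id 130 that showed up in this particular generated database:)

import json
import sqlite3

DB_PATH = "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"
ROW_ID = 130  # the NetworkConnectionProfiles row in the file inspected above; may differ elsewhere

profiles = [{
    "configurationSlot": 1,
    "connectionData": {
        "messageTimeout": 30,
        "ocppCsmsUrl": "ws://host.docker.internal/ws/cp001",
        "ocppInterface": "Wired0",
        "ocppTransport": "JSON",
        "ocppVersion": "OCPP20",
        "securityProfile": 1,
    },
}]

with sqlite3.connect(DB_PATH) as conn:
    conn.execute(
        "UPDATE VARIABLE_ATTRIBUTE SET VALUE = ? WHERE id = ?",
        (json.dumps(profiles), ROW_ID),
    )
# The connection context manager commits the UPDATE on a clean exit.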

@shankari
Collaborator Author

shankari commented Oct 24, 2024

Using the database file is more complicated than simply running with these changes (e.g., this is overwritten upon a new initialization, since the "custom database" spot is empty).

I don't think that it is this complicated. If it were overwritten, then the custom DB that I created back in May and that was copied in would have been overwritten. Since it was not, copying over this DB in the same way should ensure that it is not overwritten, and hopefully that EVerest will be able to start up properly

@the-bay-kay

Since it was not, copying over this DB in the same way should ensure that it is not overwritten, and hopefully that EVerest will be able to start up properly

I believe you are right, that it should ultimately be as simple as copying over the DB in the same way -- upon doing so, however, we receive the following error...

2024-10-24 14:27:16.437431 [INFO] ocpp:OCPP201     :: Target version: 1, current version: 1
2024-10-24 14:27:16.437499 [INFO] ocpp:OCPP201     :: No migrations to apply since versions match
2024-10-24 14:27:16.437826 [ERRO] ocpp:OCPP201    ocpp::common::SQLiteStatement::SQLiteStatement(sqlite3*, const std::string&) :: no such column: va.VALUE_SOURCE
terminate called after throwing an instance of 'ocpp::v201::InitDeviceModelDbError'
  what():  Could not create statement SELECT c.ID, c.NAME, c.INSTANCE, c.EVSE_ID, c.CONNECTOR_ID, v.ID, v.NAME, v.INSTANCE, v.REQUIRED, vc.ID, vc.DATATYPE_ID, vc.MAX_LIMIT, vc.MIN_LIMIT, vc.SUPPORTS_MONITORING, vc.UNIT, vc.VALUES_LIST, va.ID, va.MUTABILITY_ID, va.PERSISTENT, va.CONSTANT, va.TYPE_ID, va.VALUE, va.VALUE_SOURCE FROM COMPONENT c JOIN VARIABLE v ON v.COMPONENT_ID = c.ID JOIN VARIABLE_CHARACTERISTICS vc ON vc.VARIABLE_ID = v.ID JOIN VARIABLE_ATTRIBUTE va ON va.VARIABLE_ID = v.ID

I think we need to build off of the database file with User Version 5, not uv=1 -- I expected the migration process to occur prior to any runtime events, but it seems we stay at uv1 and then run into a schema mismatch (I believe this is the error above)... Let me see if there is an alternate DB file I missed and attempt to copy that over.

@the-bay-kay

So, when we start up the simulator, we receive the following console info concerning the database:

Startup Logs...
2024-10-24 16:17:51.050503 [INFO] ocpp:OCPP201     :: Established connection to database: "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"
2024-10-24 16:17:51.050713 [INFO] ocpp:OCPP201     :: Target version: 1, current version: 1
2024-10-24 16:17:51.050805 [INFO] ocpp:OCPP201     :: No migrations to apply since versions match
2024-10-24 16:17:51.074783 [INFO] evse_manager_2:  :: Ignoring BSP Event, BSP is not enabled yet.
2024-10-24 16:17:51.076880 [INFO] evse_manager_2:  :: Cleaning up any other transaction on start up
2024-10-24 16:17:51.087960 [INFO] ocpp:OCPP201     :: Successfully closed database: "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"
2024-10-24 16:17:51.088160 [INFO] ocpp:OCPP201     :: Established connection to database: "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"
2024-10-24 16:17:51.088214 [INFO] ocpp:OCPP201     :: Established connection to device model database successfully: "/ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db"
2024-10-24 16:17:51.098455 [INFO] ocpp:OCPP201     :: Successfully retrieved Device Model from DeviceModelStorage
2024-10-24 16:17:51.105581 [INFO] ocpp:OCPP201     :: Established connection to database: "/tmp/ocpp201/cp.db"
2024-10-24 16:17:51.105708 [INFO] ocpp:OCPP201     :: Target version: 5, current version: 5
2024-10-24 16:17:51.105759 [INFO] ocpp:OCPP201     :: No migrations to apply since versions match
2024-10-24 16:17:51.105850 [INFO] ocpp:OCPP201     :: Successfully closed database: "/tmp/ocpp201/cp.db"
2024-10-24 16:17:51.105972 [INFO] ocpp:OCPP201     :: Established connection to database: "/tmp/ocpp201/cp.db"
2024-10-24 16:17:51.109135 [INFO] evse_manager_1:  :: Ignoring BSP Event, BSP is not enabled yet.
...
# Additional startup logs in between...
...
2024-10-24 16:17:52.718328 [INFO] ocpp:OCPP201     :: Connecting to uri: ws://localhost:9000/cp001 with security-profile 1
2024-10-24 16:17:52.718737 [INFO] ocpp:OCPP201     :: Using network iface: 
2024-10-24 16:17:52.752215 [INFO] ocpp:OCPP201     :: LWS connect with info port: [9000] address: [localhost] path: [/cp001] protocol: [ocpp2.0.1]
2024-10-24 16:17:52.752492 [ERRO] ocpp:OCPP201    int ocpp::WebsocketTlsTPM::process_callback(void*, int, void*, void*, size_t) :: CLIENT_CONNECTION_ERROR: conn fail: 111
2024-10-24 16:17:52.752571 [ERRO] ocpp:OCPP201    void ocpp::WebsocketTlsTPM::on_conn_fail() :: OCPP client connection to server failed
2024-10-24 16:17:52.752667 [INFO] ocpp:OCPP201     :: Reconnecting in: 3000ms, attempt: 1

Originally, my plan was to modify ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db -- this was chosen because it was where we originally copied our custom database in manager/Dockerfile. As Shankari suggested, it would be better to work off the final (most recent) database version. Looking at the logs, this appears to be /tmp/ocpp201/cp.db.

Looking for the URL info as above...

 $ sqlite3 /tmp/ocpp201/cp.db "SELECT * FROM VARIABLE_ATTRIBUTE" | more
Error: in prepare, no such table: VARIABLE_ATTRIBUTE
$ sqlite3 /tmp/ocpp201/cp.db "PRAGMA table_list"
Full table list
/workspace # sqlite3 /tmp/ocpp201/cp.db "PRAGMA table_list"
main|CHARGING_PROFILES|table|5|0|0
main|METER_VALUE_ITEMS|table|13|0|0
main|METER_VALUES|table|5|0|0
main|sqlite_schema|table|5|0|0
main|TRANSACTIONS|table|7|0|0
main|NORMAL_QUEUE|table|5|0|0
main|AUTH_CACHE|table|4|0|0
main|AVAILABILITY|table|3|0|0
main|TRANSACTION_QUEUE|table|5|0|0
main|AUTH_LIST_VERSION|table|2|0|0
main|LOCATION_ENUM|table|2|0|0
main|AUTH_LIST|table|2|0|0
main|READING_CONTEXT_ENUM|table|2|0|0
main|MEASURAND_ENUM|table|2|0|0
main|PHASE_ENUM|table|2|0|0
temp|sqlite_temp_schema|table|5|0|0

It seems that the URL is no longer stored in the VARIABLE_ATTRIBUTE table... So, let's do some digging and figure out where it could be.

@shankari
Collaborator Author

I don't think that the db in /tmp is the correct one. Generally, important information is not stored in /tmp.
I would suggest looking at where the device_model_storage.db was originally accessed (by looking at the image where that was correct) and then look at the commit history of the file on GitHub to see where it was moved

@shankari
Collaborator Author

I think that device_model_storage.db should still be in the same location - it is being read from /ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db and is the expected version Target version: 1, current version: 1 and No migrations to apply since versions match. I still don't understand why you think that the version of that file is incorrect

@the-bay-kay

the-bay-kay commented Oct 24, 2024

Looking at the manifest.yaml of the OCPP201 module in everest-core:2024.3.0, it seems the path to the device_model has remained consistent. That is, in the three following cases:

  • 2024.3.0 's /everest-core/modules/OCPP201/manifest.yaml
  • 2024.9.0 's /everest-core/modules/OCPP201/manifest.yaml (link)
  • 2024.3.0 's /everest-demo/manager/Dockerfile, when copying over the custom database (link)

All three of these find the default database in /ext/source/build/dist/share/everest/modules/OCPP201/device_model_storage.db.

I think that device_model_storage.db should still be in the same location

So this is correct -- that was my understanding as well. However, when we modify this file as described here, the file is immediately overwritten. Video of this occurring below:

Editing database in Docker...
Trimmed.Database.Edit.mp4

If, instead of keeping this file in place, we remove it and spin the Docker container back up, we see different behavior, resulting in the error described here.

Importing .db file...
compressed_dbimport.mp4

So, I want to figure out why this behavior isn't the same. Reading the documentation, it says...

If there is no custom database used for the device model, and 'initialize_device_model' is set to true in the constructor of ChargePoint, the device model will be created or updated when ChargePoint is created.

So, it may be that we are editing the correct base model, but are not correctly indicating to the module that we are using a custom database. Let me read into this further...

@the-bay-kay

Upon a reread of the initialization documentation, this line stood out

When the database is created for the first time, it will insert all components, variables, characteristics and attributes from the component config.

While it doesn't explain how to utilize a custom database, this does give us an avenue for configuring the base model. That is, if we edit this config, we should be able to set the correct URL... let's try that out and see how it goes.

@shankari
Collaborator Author

@the-bay-kay I would read code, not documentation. We are going to modify this code. We should be able to read it. You can see where in the code the file is generated and why it is (or is not) overwritten.

@shankari
Collaborator Author

I am looking at the code here
https://github.com/EVerest/libocpp/blob/8d74ff558945eb189738555be2d60b22800cf962/lib/ocpp/v201/init_device_model_db.cpp#L84

We should be able to add logs there and see what is actually going on.

@the-bay-kay

the-bay-kay commented Oct 25, 2024

I was looking at DeviceModelStorageSqlite::get_device_model(), because it seems that this function is also where the initial statement above is created -- though the execution function itself doesn't seem to elucidate much...
If we're looking at this from the device_model_storage-sqlite.cpp side of things, we find that the db object is initialized here, using DatabaseConnection().

Let's put some logs around to shed some light on this...

@the-bay-kay

the-bay-kay commented Oct 25, 2024

Just connecting some of the dots for my understanding:

How I'm Recompiling... Just as an aside (and for my own bookkeeping), I wanted to record how I'm re-compiling these modules for testing in the Docker container (adding logs, making changes, etc.) - The `manager/Dockerfile` runs the install script via
/entrypoint.sh run-script install
  • Looking at this, it simply calls /ext/install.sh, which creates the build directory and runs CMake. So, for my own rebuild, I'm running...

    rm -rf /ext/source/build \
      && /ext/scripts/install.sh

    Since I ran into issues running this fragment without clearing the build dir:

    cd /ext/source/build \
      && cmake -DBUILD_TESTING=ON .. \
      && make install -j6 \
      && make install_install+everest_testing

    EDIT: I forgot to record the error I ran into doing this; I think I should be able to build like this. Regardless, I'm almost certain there's a way to rebuild without removing /build -- will investigate. The error was a lack of resources for the Docker container; the solution is described below.

@Abby-Wheelis

Abby-Wheelis commented Nov 11, 2024

@Abby-Wheelis can you also run the SP1 demo on your laptop (per instructions at #74 (comment), from Katie's fork) so you can compare the working SIL logs with the not working HIL logs?

Trying this now while we're in the lab

UPDATE:
It seems to just be hanging on ERROR [manager builder 6/8] RUN go mod download

@the-bay-kay

the-bay-kay commented Nov 11, 2024

@the-bay-kay have you tested this?

I have tested the changes, and we are charging with SP1 as expected. I have not tested with the PyJosEV changes -- that said, the patch applies OK.

@Abby-Wheelis

I'm unable to run git clone https://github.com/the-bay-kay/everest-demo.git; cd everest-demo; git checkout rollforward-demo; bash demo-iso15118-2-ac-plus-ocpp.sh -r $(pwd) -b test-demo -1 as it just keeps hanging on an error.

@shankari
Collaborator Author

I bet it's hanging because of the limited CPU/memory resources. @the-bay-kay you need to change your fork to bump up resources.

@the-bay-kay

the-bay-kay commented Nov 11, 2024

...you need to change your fork to bump up resources

I must have left that as is, since I don't know what good "default resources" are (e.g., how much memory / what sort of CPUs people are using). I realize that conflicts with the "one-line" ability, however -- my bad! @Abby-Wheelis, while I'm making my changes locally, try manually going into the cloned directory and changing everest-demo/docker-compose.ocpp201.yml as follows:

Wait, I did bump up the resources -- feel free to adjust them up further if need be. Let me test my one-liner as well...

@catarial

I fixed the mismatched number of EVSE managers. The way you configure libocpp is by dropping files in /usr/share/everest/modules/OCPP201/component_config/custom. There were a couple of JSON files in here that specified a second connector and EVSE manager. I just deleted the extra files and it worked. It seems like the manager on the uMWC was able to connect to the external demo.

@catarial

Below is a patch that may be applied to iso_server.cpp, via running

cd /; patch -p0 -i ${path_to_patch_file}

enable_iso_dt.patch

I will include this in my fork of the demo, and push the changes shortly.

Currently trying to build with this patch

@the-bay-kay

UPDATE: It seems to just be hanging on ERROR [manager builder 6/8] RUN go mod download

After a fresh clone + build, I'm unable to replicate this issue. Did bumping the resources in docker-compose.ocpp201.yml change anything? I can look into this further -- let me write down the instructions for testing the OCPP integration first...

@Abby-Wheelis

Did bumping the resources in docker-compose.ocpp201.yml change anything?

Bumping up to 12 GB did not change anything; I can try more when I'm back at my desk, but trying to squeeze what we can out of our last hour in the lab!

@catarial

catarial commented Nov 11, 2024

Results of lab session 2024-11-11:

We got the uMWC manager hooked up with the Docker demo. It required removing some files in /usr/share/everest/modules/OCPP201/component_config/custom

#74 (comment)

After this we could not get the OCPP authentication to work while charging, so we switched back to the dummy token validator and provider.

Once we verified that charging worked with the OCPP module enabled, we switched to Katie's patch. #74 (comment)

This patch seems to have worked partially. The charger was able to receive a schedule offering 70 kWh.

image

The max current stayed fixed at 32 A, even though we sent a schedule to lower it.

image

@shankari
Collaborator Author

@catarial are these from the MQTT server? Did you look at the text logs on the server?
I don't think this is doing what you think it is doing. The SAScheduleTuple is sent from the charger to the car.
You should have run a script to send a max current from the CSMS to the station. Did you do that?
@the-bay-kay did you send over the instructions?

@the-bay-kay

the-bay-kay commented Nov 12, 2024

In my rush earlier, I misunderstood and thought the instructions were only needed for SIL, but I realize that made no sense considering we were testing OCPP in the lab. My bad. Belatedly, here are the instructions for checking that OCPP profiles are accepted correctly. I have pushed the patches in my SIL demo, and included the patch to view OCPP logs directly here -- this is applied the same way as the iso patch.

Testing OCPP Smart Charging Profiles

  • Start up the demo within Docker, as described above.
  • On your host machine, navigate to everest-demo/demo-scripts/, and open MaxProfile10A4hr.json with your preferred IDE.
  • Change the timestamp found in the "startSchedule" field to reflect the current time and day -- as the duration of the schedule is 4 hours (14400 seconds), this value must be <4 hours before the current timestamp. (A small helper for computing this is sketched after this list.)
    • As an example: If my local time is 1:03 PM PST on Nov. 12th, we would change the timestamp to:
      "startSchedule": "2024-11-12T21:00:00.000Z"
      Since the timestamp is in UTC, we adjust our local timestamp by +8 (+7 for folks in CO). Likewise, the start of the schedule occurs before the current time, ensuring we will see its effects.
  • Next, we send the schedule to the CSMS, using the demo script provided:
    bash maeve-set-charging-profile.sh cp001 MaxProfile10A4hr.json
  • As this is sent, we can watch the everest-manager logs to confirm that the schedule has been accepted:
    // Above should be the JSON message sent...
    2024-11-12 05:03:29.994257 [INFO] ocpp:OCPP201     :: Accepting SetChargingProfileRequest
    
  • Then, you should be able to charge as normal. To see the change in PMax, you may add a print statement on line 789 of /ext/source/build/dist/libexec/everest/3rd_party/josev/iso15118/evcc/states/iso15118_2_states.py to display charge_params_res.sa_schedule_list.schedule_tuples:
    // No Changes
    p_max=PVPMax(value=13800, multiplier=0, unit=<UnitSymbol.WATT: 'W'>)
    // With Schedule:
    p_max=PVPMax(value=6900, multiplier=0, unit=<UnitSymbol.WATT: 'W'>)
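
(As mentioned in the timestamp step above, here is a tiny helper -- not part of the demo scripts, just a sketch -- that prints a "startSchedule" value a few minutes in the past, already converted to UTC, so the timezone math doesn't have to be done by hand:)

from datetime import datetime, timedelta, timezone

# Start the schedule slightly in the past so it has already begun, but stays
# well within the 4-hour (14400 s) duration of MaxProfile10A4hr.json.
start = datetime.now(timezone.utc) - timedelta(minutes=5)
print('"startSchedule": "%s"' % start.strftime("%Y-%m-%dT%H:%M:%S.000Z"))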
    

@the-bay-kay

the-bay-kay commented Nov 12, 2024

Wrapping up the SIL rollforward, I have created patches for the following files:

List of Patches
  • OCPPV201/InternalCntrlr.json
  • EvseV2G/iso_server.cpp
  • JsEvManager/index.js
  • PyJosEv/module.py
  • ext-iso15118-2/evcc
    • ev_states.py
    • iso15118_2_states.py
    • com_session_handler.py

Additionally, I have imported the power_curve.py file used for schedule computation.

When running, we are failing shortly after we receive the ChargeParameterDiscoveryRes:

Error Logs...
2024-11-12 15:39:07.915230 [WARN] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: before adjusting for departure time, max_current 10.000000, nom_voltage 230, pmax 6900, departure_duration 3610
2024-11-12 15:39:07.915326 [WARN] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: Requested departure time 3610, requested energy 55000.000000
2024-11-12 15:39:07.915432 [WARN] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: Departure time specified, checking to see if we can lower requirements
2024-11-12 15:39:07.915538 [WARN] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: Min hours to charge 7.971014, requested departure time in hours 0.000000, pmax is unchanged
2024-11-12 15:39:08.056520 [INFO] evse_manager_1:  ::                                     CAR ISO V2G ChargeParameterDiscoveryReq
2024-11-12 15:39:08.056925 [INFO] evse_manager_1:  :: EVSE ISO V2G ChargeParameterDiscoveryRes
2024-11-12 15:39:13.041813 [ERRO] iso15118_charge void dlog_func(dloglevel_t, const char*, int, const char*, const char*, ...) :: Timeout waiting for next request or peer closed connection

Looking at the logs, we correctly read the DepartureTime within iso_server, but then it seems we fail to convert it into hours (See: "Min hours to charge 7.971014, requested departure time in hours 0.000000"). Let's debug further...
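
(As a quick sanity check of the numbers in that log -- back-of-the-envelope arithmetic only, not the actual iso_server code:)

e_amount_wh = 55000   # requested energy from the log, in Wh
p_max_w = 6900        # offered PMax from the log, in W
departure_s = 3610    # requested departure time from the log, in seconds

print(e_amount_wh / p_max_w)   # 7.971..., matches "Min hours to charge 7.971014"
print(departure_s / 3600)      # ~1.003, whereas the log reports 0.000000

The minimum-hours figure reproduces exactly, so the energy/power inputs look right; the 0.000000 suggests that either the value being converted or the seconds-to-hours conversion itself is going wrong somewhere in the patched code.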

@catarial

catarial commented Nov 12, 2024

@catarial are these from the MQTT server? Did you look at the text logs on the server? I don't think this is doing what you think it is doing. The SAScheduleTuple is sent from the charger to the car. You should have run a script to send a max current from the CSMS to the station. Did you do that? @the-bay-kay did you send over the instructions?

@shankari

Those screenshots were from the UI of the car simulator. We did use the script to send a max current to the OCPP server. We were able to verify that the OCPP server was receiving it, but it wasn't changing on the charger. We used the charging-profile-1.json and charging-profile-2.json. We didn't use the charging profile that Katie said to use, so that could be it.

I believe these issues can be tested in SIL though, since we were able to verify that the car is seeing the PMax of the charger. The issue is just getting the charger to respond to the OCPP server.

@the-bay-kay

the-bay-kay commented Nov 12, 2024

The rollforward-demo branch of my everest-demo fork has been updated to support renegotiation! We need to make a few polishing changes to the Node-RED flows, but the renegotiation demo is now officially running off of release 2024.9.0, and does not require a custom image to function. Initial testing shows that we are functioning at parity with our 2024.3.0-based demo, but I plan to update this thread with a checklist of tests to ensure we are functioning as intended.

@shankari
Collaborator Author

@the-bay-kay I am not sure what you mean by this

does not require a custom image to function

It does require a custom image, in that we have to rebuild after applying the patches, right?

@the-bay-kay

in that we have to rebuild after applying the patches, right?

Right -- that is what I meant, that was bad phrasing. I was just specifying that we rebuild with patches, rather than pulling a prebuilt image that I'm keeping on my personal GHCR.

@shankari
Collaborator Author

For the record, it is better to not rebuild with patches during the single line demo, since it requires larger amounts of memory/CPU and needs people to wait for the compile to finish. But I think I can deal with that as part of my cleanup.

@the-bay-kay

For the record, it is better to not rebuild with patches during the single line demo, since it requires larger amounts of memory/CPU and needs people to wait for the compile to finish. But I think I can deal with that as part of my cleanup.

Gotcha! Once we're at that point, I can host a copy on my GHCR and link it here, or we can host one on US-JOET, whichever will be better.

@the-bay-kay

the-bay-kay commented Nov 13, 2024

Working on updating the Node-RED flows, it seems we're still having issues with the power meter not reporting correctly (which is surprising, since the fixing patch, PR 773, has been merged into 2024.9.0).

Screenshot

image

I'm assuming this is an issue with how my Node-RED flow is expecting/displaying the data? Let's investigate further. EDIT: yup, it seems the flow that the rollforward is based off of didn't include the visualization update. Let's fix that real quick...

@the-bay-kay

With the power gauge fixed (and a missing patch added back in, whoops!), I'm working on adding the OCPP / ISO 15118 logs to the Node-RED flows, using the logs developed here as a rough guideline. In order to capture OCPP messages, we needed to find the correct MQTT topic to subscribe to. For documentation's sake, I did so by looking at each module's interface file, and looking for a $ref field as an example. So, using everest/ocpp, we can now listen in on the OCPP messages... Let's clean these up into a more presentable format!

@shankari
Collaborator Author

shankari commented Nov 14, 2024

@the-bay-kay is the missing patch + power gauge committed to your fork?

@the-bay-kay

...is the missing patch + power gauge committed to your fork?

Yup! The basics of the ISO15118 Messages are added as well, though not cleaned up.

@the-bay-kay

the-bay-kay commented Nov 14, 2024

I did so by looking at each module's interface file, and looking for a $ref field as an example. So, using everest/ocpp, we can now listen in to the OCPP messages...

While this is what the previous logs were doing, I don't think this is sufficient for either OCPP or ISO15118-2. That is: if our goal is to capture the high-level ISO15118 calls (ChargeParameterDiscoveryRes/Req, PowerDelivery Res/Req, etc.), I do not believe these are being transmitted via the iso15118_ev or iso15118_charger topics. I came to this conclusion by going through the interfaces more closely, and watching the existing logs as we ran a charging session -- this gives us a nice basis for how we may want to display the data in HTML, but we will need to sniff different topics in order to read the correct calls.

Starting with ISO15118, I believe we need to pick up data transmitted between EvseV2G/iso_server.cpp <--> PyJosEV. (PyJosEV itself receives these through a different interface, but the idea is to pick up SECC <-> EVCC.) So, let's take some time and see if we can't deduce what topic(s) these are being transmitted on.

We already have a steady stream of information concerning the ISO15118-2 call response stream in the following logs:

2024-11-14 15:02:21.782608 [INFO] evse_manager_1:  ::                                     CAR ISO V2G ChargeParameterDiscoveryReq
2024-11-14 15:02:21.783258 [INFO] evse_manager_1:  :: EVSE ISO V2G ChargeParameterDiscoveryRes

So, rather than trying to come up with our own path, let's find where these are published, and tack on an MQTT broadcast to these. We know this is the EvseManager module: searching there, I believe we subscribe to these messages here, and log them if session logging is enabled (which it is). This is finally added to the session log here. So, let's see if we can't piggyback off of this log!

I still think there's an easier way to do this, though. If EvseManager is subscribing to these publications and adding them to the session log, I'd assume we should likewise be able to subscribe to these messages from Node-RED, without EvseManager acting as an intermediary... I want to find exactly how we're subscribing to these messages! (A throwaway subscriber sketch is below.)
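
(To help with that digging, a throwaway subscriber like the sketch below can dump everything crossing the internal broker. It assumes the paho-mqtt 1.x Python package and that the demo's mosquitto broker is reachable on localhost:1883 -- both assumptions, adjust as needed:)

import paho.mqtt.client as mqtt

BROKER_HOST = "localhost"  # assumed: broker exposed to the host by the demo compose file
BROKER_PORT = 1883

def on_connect(client, userdata, flags, rc):
    # '#' matches every topic, so we can watch which ones actually carry the
    # ChargeParameterDiscovery / PowerDelivery traffic we want to surface in Node-RED.
    client.subscribe("everest/#")

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode(errors='replace')}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.loop_forever()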

@the-bay-kay

the-bay-kay commented Nov 14, 2024

So, the subscription to ISO15118 messages (here) is a member function of EvseManager's r_hlc, an object defined using a generated interface (docs) for the iso15118_charger module (we use EvseV2G). It makes sense that the subscription will occur in the V2G... I may have just traced my way in a circle back to iso_server.cpp... If nothing else, I've got a better understanding of this workflow -- let's double-check this module to see if we can't find the MQTT topics we're looking for.

@shankari
Collaborator Author

Closing this based on #84 (comment)
and since any further tweaks will be as a result of testing

@shankari
Collaborator Author

Re-opening this briefly to get the changes into main.

Checking the fork (main...the-bay-kay:everest-demo:rollforward-demo), we have this list of files to work on:

  • config-sil-ocpp201-pnc.yaml (no changes needed since we are going to use one EVSE except for the error history module, do we need it?)
  • demo-iso15118-2-ac-plus-ocpp.sh (most of the work, bunch of patches to apply)
  • demo-scripts/MaxProfile10A4hr.json (will not include since this will be the "ISO integration")
  • docker-compose.ocpp201.yml (retain renaming flow)
  • manager (insert patches)
  • patches (copy over)
  • nodered/config/config-sil-iso15118-ac-flow.json (which of these two has the correct changes?)
  • nodered/config/config-sil-two-evse-flow.json
  • nodered/scripts/preview_curve_4_nodered.py

@the-bay-kay do you know why two Node-RED flow files have changes?

shankari reopened this Nov 15, 2024