
Particle.unsubscribe() doesn't preserve system subscriptions #2293

Merged
sergeuz merged 2 commits into develop from unsubscribe_system/ch74478 on Apr 1, 2021

Conversation

@sergeuz sergeuz (Member) commented Mar 3, 2021

Problem

Particle.unsubscribe() clears the system's subscriptions to cloud events along with the application subscriptions, which breaks OTA updates.

Solution

The list of registered event handlers is managed by the protocol layer, which doesn't distinguish between system and application events. As a workaround, the system layer now re-adds the system subscriptions after clearing them via the protocol's spark_protocol_remove_event_handlers().
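
A minimal standalone sketch of the idea, not the actual Device OS implementation: the flat handler registry, removeAllEventHandlers() (standing in for spark_protocol_remove_event_handlers()), and registerSystemSubscriptions() are all illustrative assumptions.

// Standalone sketch, not Device OS code: models why the system layer has to
// re-add its own subscriptions after clearing the protocol's handler list.
#include <functional>
#include <string>
#include <vector>

struct EventHandler {
    std::string prefix;                   // event name prefix, e.g. "spark" or "particle"
    std::function<void(const char*)> fn;  // callback invoked for matching events
};

// The protocol layer keeps one flat list and cannot tell system handlers
// apart from application handlers.
static std::vector<EventHandler> handlers;

// Stand-in for the protocol's spark_protocol_remove_event_handlers().
void removeAllEventHandlers() {
    handlers.clear();
}

// Hypothetical system-layer helper that restores the reserved subscriptions.
void registerSystemSubscriptions() {
    handlers.push_back({"spark",    [](const char*) { /* system event handling */ }});
    handlers.push_back({"particle", [](const char*) { /* system event handling */ }});
}

// What Particle.unsubscribe() effectively does with the workaround applied:
// clear everything, then immediately re-register the system subscriptions so
// spark/* and particle/* events keep reaching the system layer.
void unsubscribeAll() {
    removeAllEventHandlers();
    registerSystemSubscriptions();
}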

Steps to Test

Verify that a product application based on the example code below can be updated OTA. Alternatively, add some logging to the systemEventHandler() function and verify that the system correctly receives particle/* events during the handshake (a sketch follows the example app).

Example App

#include "application.h"

SYSTEM_MODE(SEMI_AUTOMATIC)
SYSTEM_THREAD(ENABLED)

// PRODUCT_ID(...)
// PRODUCT_VERSION(...)

namespace {

bool updatePending = false;

} // namespace

void setup() {
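    // Block OTA updates and clear event subscriptions before connecting; the bug
    // made this unsubscribe also wipe the system's own spark/particle handlers.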
    System.disableUpdates();
    Particle.unsubscribe();
    Particle.connect();
}

void loop() {
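    // Once the cloud reports a pending update, allow it to proceed.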
    if (!updatePending && System.updatesPending()) {
        updatePending = true;
        System.enableUpdates();
    }
}
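
For the alternative test step above, adding logging along these lines to the system's event handler shows whether particle/* events still arrive after the unsubscribe. This is a sketch only; the real signature of systemEventHandler() in the system layer is assumed here:

// Sketch: assumed handler signature, for illustration only.
void systemEventHandler(const char* name, const char* data) {
    // Expect particle/* (and spark/*) events to show up here during the handshake.
    LOG(INFO, "System event: name=%s data=%s", name, data);
    // ... existing system event handling ...
}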

References

  • [ch74478]

@sergeuz sergeuz added the bug label Mar 3, 2021
@sergeuz sergeuz modified the milestones: 2.0.1, 2.0.2 Mar 3, 2021
The review comment below refers to this excerpt from the updated subscription code in the system layer:

if (!Particle.subscribe("particle", systemEventHandler, MY_DEVICES) ||
        !Particle.subscribe("spark", systemEventHandler, MY_DEVICES)) {
    LOG(ERROR, "Particle.subscribe() failed");
}
A project member commented:
Since there is no error handling other than logging a failure, would it make sense to break these out into separate calls like they were before so spark is always attempted to be registered, even if particle fails? We could probably also change the LOG to LOG_DEBUG to keep the file size down.
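
A sketch of the suggested variant, splitting the combined check into independent calls and swapping LOG for LOG_DEBUG; names and parameters are taken from the quoted excerpt, and this is an illustration rather than the final code:

// Attempt both subscriptions independently so a failure on "particle"
// doesn't short-circuit the "spark" registration.
if (!Particle.subscribe("particle", systemEventHandler, MY_DEVICES)) {
    LOG_DEBUG(ERROR, "Particle.subscribe(particle) failed");
}
if (!Particle.subscribe("spark", systemEventHandler, MY_DEVICES)) {
    LOG_DEBUG(ERROR, "Particle.subscribe(spark) failed");
}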

@avtolstoy avtolstoy (Member) left a comment:
Looks good. Shame that we cannot easily trigger a spark/ or particle/ event to add some kind of test.

@sergeuz sergeuz force-pushed the unsubscribe_system/ch74478 branch from 4ed1aa9 to 955289b on April 1, 2021
@sergeuz sergeuz merged commit a10eefe into develop on Apr 1, 2021
@sergeuz sergeuz deleted the unsubscribe_system/ch74478 branch on April 1, 2021