Decision Proposal 052 - Draft Standards Feedback Cycle 5 #52
Hi James, I just wanted to bring back up something we posted in cycle 4 regarding start and end time for transactions. You responded saying you would change the number of days, but what we need clarification on is how start and end time work.
In response to the query from Li Jiang, I would be interested in opinions from the community on the addition of an account creation date to the account structure. Also, if added, should it be mandatory or optional, and should time be included? -JB-
Hi @anzbankau, I have previously received feedback on start/end times. I find it intuitive that start refers to the bound of the first record and end is the bound for the last record. It has been suggested, however, that start should be the older date and end should be the newer date. I would appreciate thoughts on which of these the community finds more intuitive. Alternatively, if another set of names would be less ambiguous, that may be preferred. -JB-
Account creation date could be useful for an ID verification use case - e.g. the longer an account has been established, the less likely it is that the account owner's identity has been compromised.
> In response to the query from Li Jiang I would be interested in opinions from the community for the addition of an account creation date on account. Also, if added, should it be mandatory/optional and should time be included. -JB-
From a Consumer perspective, you’d expect something as fundamental as
when-did-I-open-this-Account to be available data.
From a Comparison perspective, you’d want it mandatory because:
1. Cases where things change on a fixed period after account opening: term deposits, introductory interest rates, bonus card points offers, fees that depend on time in account (these may not exist in Big Four products right now, but did several years ago).
2. Ability to match an account up to the market environment / offer that
existed when they opened it.
3. Propensity to switch is highly correlated to time in account, and may
suggest different UX.
Hi James, our current channel experience is that the start date is always the older date and the end date is the newer, so we would be happy to align to this going forward.
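That convention (start as the older bound, end as the newer) can be illustrated with a small sketch; the parameter and field names here are hypothetical, not drawn from the standards:

```python
from datetime import datetime

def filter_by_date(transactions, start, end):
    """Keep transactions whose timestamp lies within [start, end],
    where start is the older bound and end is the newer bound."""
    if start > end:
        raise ValueError("start must not be later than end")
    return [t for t in transactions if start <= t["postingDateTime"] <= end]

txns = [
    {"id": "t1", "postingDateTime": datetime(2019, 1, 5)},
    {"id": "t2", "postingDateTime": datetime(2019, 2, 10)},
    {"id": "t3", "postingDateTime": datetime(2019, 3, 20)},
]
# Everything from 1 Feb (older bound) to 31 Mar (newer bound).
recent = filter_by_date(txns, datetime(2019, 2, 1), datetime(2019, 3, 31))
```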
I’m from ID Exchange Pty Ltd; in addition to our own consumer consent IP and systems, we represent Digi.me in Australia. Digi.me is an enabling personal data sharing platform designed with both privacy and security by design at its core. Importantly, as data "pipes", the platform does not see, touch or hold data. The digi.me platform is a new enabling technology that allows the CDR to truly represent the operational intent of "consumer controlled data" with transparent, secure and private data sharing. Consent certificates that are exchanged within the digi.me system include:
Please reference: https://digi.me/private-sharing-sdk

The first-cut standards appear comprehensive and well structured for a business-to-business API; however, I don’t see APIs and consent processes for business-to-consumer. What if I want to take my data and store it in my own repository of choice (digi.me provides SSL split-key encryption, and data is normalised into a single interoperable ontology), and from there choose who to share it with?

The consent flows in the sample CX represent a similar challenge: well defined for business communications but lacking in consumer centricity. As a consumer, how do I exercise control over my own data? By control I mean: how do I take it into my repository and share it from there? While it’s not in a repository under my control I don’t actually control it or have full agency over my personal data; I am restricted to acting on it in the manner afforded by the controls the holding entity gives me. How can I ensure that ALL other actors, specifically including the actor currently holding it, aren't using it in a manner I don’t consent to?

I don’t see any standards around me holding my own data, or me revoking consent from the currently holding entity, in these standards and processes; this should be incorporated, as today consumers have this right manually. In short, I see these so far as standards for corporate management of consumer data, not consumer management of consumer data. There needs to be a consumer-centric approach to holding the data, not just basic consent flows, and revocations need to be addressable to ALL entities.
@mozocdr the feedback on the desirability of an account open date field is noted, as are the comments regarding the field being mandatory. I am concerned that this data element may not be present in every account ledger in the industry. If the data is not available in some cases but the schema makes it mandatory, then some banks may be unable to be compliant. If the field is optional it only means that it may not be present when the data is not held. Would this still address the concerns you have raised? Also, we have stated before that we are actively discouraging matching an open account with a market offering. The account schemas duplicate much of the feature and pricing information, so this is seen as unnecessary. It is also potentially misleading to make such a connection, as many products can be open to negotiation in specific circumstances which would not be represented in the generic offering data. -JB-
On the topic of start and end date query parameters there have been some internal discussions and we are considering one of the following three approaches (which we would apply across the standards when date bounding is available):
What does everyone think? -JB-
Hi James, in addition to ID Exchange's commentary today, I would like to mention that digi.me is a member of, and active contributor to, consent standards for the Kantara Initiative: see https://kantarainitiative.org/ Digi.me's participation is around the forming of global consumer-centric consent receipts via open standards based on ethical approaches built with privacy and security by design principles. As such, digi.me has been involved in Kantara work groups, as cited in the attached link, on consent and information sharing. We trust this will be of interest to Data61/ACCC.

We note: digi.me's framework is already GDPR compliant and allows bespoke consent receipts to cater for countless specific purposes, with chain of custody incorporated, rather than blanket consent practices which may lead to governance and compliance issues, or unnecessarily extended data risk, down the track. We also believe that with the ability to curate app/data-specific consent permissions, user uptake will be higher via more attuned, transparent and trusted data transactions.

As technologists in the area of privacy- and security-assured data sharing, please let us know if the CX work group requires any further information about aligning with international standards that promote data portability and consumer centricity. We can provide more detailed architectures under NDA to support the overarching objectives, rules and standards pertaining to CDR accreditation. With digi.me's platform ready to underpin OB/ACCC requirements, we are keen to ensure the Australian ecosystem can leverage a lengthy and deep investment, and utilise interoperable consent frameworks plus the digi.me app development environment to craft and accelerate solutions to market. Kind regards, Jo - IDX
@IDexchange, while I understand (and quite like) the use case you are seeking to promote, the model you propose would constitute a significant change to the structure of the regime as laid out in the recommendations of the Farrell report. The regime will include the ability for a consumer to obtain the data covered by the designation instruments directly, and a standard for this will be created, but this is more likely to be a download or on-screen presentation via an existing, authenticated channel. Another approach would facilitate data transfer by customers to non-accredited recipients and would undermine the safeguards established for the regime. The model you describe could best be achieved under the CDR by establishing an accredited entity that would, as a service, obtain the customer's data, immediately encrypt it and make it available for offline storage by the customer. This would be an interesting innovation that the regime would facilitate in its current form and may be attractive to customers. For version 1 of the CDR we will not be considering alternate consent arrangements or mechanisms unless directed to by the ACCC. -JB-
@JoIDX, thanks for your post. I will pass your interest on to the CX working stream. -JB-
James - Page one from the Farrell Report - far below. Thank you for your updates; I will take the constraints of v1.0 up with Treasury/ACCC, as v1.0 does not, in our view, demonstrate the essence of a true Consumer Data Right: it actions what is perhaps best described as a corporate-to-corporate data right. We can and must do better.

ID Exchange feels strongly about the ability to accredit advanced solutions that parallel the consumer's right to obtain a synced, encrypted copy of their data, with the same safeguards inside the system, as per the ability they are being granted to assign this PII data from party A to party B (which leads to more and more duplication of PII - aka greater risk of breach). We understand we are tabling a new enabling innovation approach, which is again core to the Farrell Report's aims of opening up competition and innovation to generate new economic stimulus and new products and tailored services for consumers. At present, with existing technology and only small changes needed to incorporate a consumer-centric B2C alternative within your accreditation program (and with our ability to meet a Feb 2020 deployment), we feel it is to the benefit of ALL to table such matters while you are in the CX decision-making process.

Consent is highly complex when interwoven for the types of granular and compliant processes required. Digi.me is a leader in the field for these types of complex, at-scale tech services. We believe that such tech stack solutions will be critical to the success of, and the market's ability to activate, new interwoven personal data solutions as led out by Open Banking. We would appreciate discussing how we can find a work-around on this vital and key matter. Respectfully submitted, Jo - IDX

Farrell Report - cut from first two paragraphs: "... It is designed to give customers more control over their
Thanks James. I think if it's available, it ought to be included. Mandatory-if-held, if you like. AJ
Hi James, option 3 is the best for us as it clearly defines the fields and keeps the API the same (given development is well underway, we are trying to minimise changes to our code base).
At this point of the draft standards feedback for cycle 5, we'd like to provide our feedback on the Products API (minor, and for update post the v1 baseline), Accounts API rate tier structure support, and the account creation date and time field that was raised in this issue/thread.

Products API

Some minor housekeeping / alignment areas:
Accounts API

Rate tiers

NAB is not comfortable with the response on applying rate tiers at the account level, as some products are contracted to a tiered rate rather than a single rate, which the following response provided by Data61 in Cycle 4 (#48) seems to imply: “Rate tiers will not be added to the account structure as the expectation is that rates have been applied at the time of account origination” (#48 (comment)). At NAB, there are a number of products with contracted tiered rates, where the rate will vary during the life of an account based on the daily balance; for example, “NAB Retirement Account” or “NAB Business Management Account” (https://www.nab.com.au/personal/interest-rates-fees-and-charges/indicator-rates-deposit-products). The rate applied to an account is based on the balance on any given day and needs to be reflected in the Accounts API's response rather than as a single rate. The current
In the Get Product Detail endpoint, Data Holders have this flexibility through the tier structure in which to provide the rate and balance range. We'd like to understand why this will not be replicated for the Get Account Detail endpoint, and we'd like this feedback to be reconsidered with the additional reasoning provided above.

Account creation date and time

In response to the inclusion of an account creation date and time: if included, there must be a clear definition and interpretation of what this metadata represents and how it can be used. Using the account creation date and time to infer the relationship length between the customer and the Data Holder might be misleading when taking into consideration the CDR data extents and the way accounts can be closed and then opened. Scenarios such as the lost and stolen card replacement process, or product swaps, can involve a new contract date that is different from the underlying account creation date. The account creation date and time is valid data, but it could be misleading under certain use cases. We support the openness and competitive behaviour that the CDR sets out to achieve, but could this specific piece of data be misused? Should the CDR Standards include the account creation date and time, this field should be optional, similar to the way the date and time fields for transaction data are treated.
Minor Correction: Lost Descriptions for Account Balances

With the refactoring of balances, many of the descriptions have been lost, e.g.: Also,
Hi @JamesMBligh, I'm just seeking some clarification on decision proposal #21, particularly around transactions-per-second thresholds, if possible. As I interpret it, we, as Data Holders, will be required to report our performance metrics to the ACCC via the admin/reporting endpoints and will be able to (if we wish) throttle requests if the TPS thresholds are exceeded. The question that has come up is: will we need to provide evidence that we are able to perform up to the TPS thresholds before we start serving endpoints? I presume not, as not all Data Holders will receive enough demand to reach anywhere near these thresholds anyway, but it would be great to have this clarified. Thanks!
Hi James, we are seeking a couple of clarifications:
Timezone documentation appears inconsistent (in Common Field Types)
Could it be clarified whether all three of these types must always be in UTC?
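For context (a general RFC 3339 illustration, not a statement of what the standards require), the same instant can be written with a UTC offset or in UTC form, which is one reason an explicit statement either way would help:

```python
from datetime import datetime, timezone

# Two RFC 3339 representations of the same instant:
# one carrying a local UTC offset, one already in UTC.
with_offset = datetime.fromisoformat("2019-03-04T10:30:00+10:00")
in_utc = datetime.fromisoformat("2019-03-04T00:30:00+00:00")

# Normalising the offset form to UTC shows they are equal.
assert with_offset.astimezone(timezone.utc) == in_utc
```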
The account address technical issue we mentioned in Draft Standards Feedback Cycle 4 has not yet been addressed.
@dimuthnc posts this query (moved here from a separate issue): In the 2018 Christmas working draft, we have noted the use of the x-v, x-min-v and x-PID-v header parameters (which are intended to determine the version of the request/response). As we understand it, data holders are supposed to support these HTTP headers in all consumer API endpoints. On the other hand, the API version must also be specified as a path parameter (in each endpoint). In a conflicting scenario, how should we prioritise which version to use? As an example, consider the scenario below.
In the above scenario, if we only consider the x-v and x-min-v headers, we should respond with version v6 (the highest supported version between x-min-v and x-v). But if we consider the request path, we need to respond with version v7. How should we proceed in such a conflicting scenario? On a separate note, we have noted the proposed discoverability API [1]. Once we have this API, we can simply use it to discover available API versions. If so, as a data recipient, do I still need to use the x-v, x-min-v or x-PID-v headers? [1] - #19
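The header semantics described in the query might be sketched as follows; this is a non-normative illustration of "highest supported version between x-min-v and x-v", with the function name invented for the example:

```python
def negotiate_version(supported, x_v, x_min_v=None):
    """Pick the highest endpoint version the holder supports that is
    no greater than x-v and no less than x-min-v (when supplied).
    Returns None when there is no overlap (an error response case)."""
    floor = x_min_v if x_min_v is not None else x_v
    candidates = [v for v in supported if floor <= v <= x_v]
    return max(candidates, default=None)

# Holder supports endpoint versions 4-6; recipient sends x-v=7, x-min-v=5.
assert negotiate_version({4, 5, 6}, x_v=7, x_min_v=5) == 6
```

Note that this sketch covers only the header-carried endpoint version, not the version segment in the URI path.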
@dimuthnc, in response to your query I would refer you to the final decision #4. Essentially, the regime has two different versions:
The version in the URI refers to the standard version and the headers refer to the endpoint version, so there is no need to prioritise between them. -JB-
Hi James, regarding the endpoint for direct debits (GET /banking/accounts/{accountId}/direct-debits): the financialInstitution data element under BankingAuthorisedEntity is mandatory. We would like to make this optional, as it will not always be available to be returned.
This is somewhat related to @IDexchange's remarks, particularly about the agency of the consumer. The current design does seem to fall short of "giving consumers greater control over their data", as the service (API) providers remain present and in control throughout the transaction of data. Was something like the Verifiable Claims model, where users are given their data as claims to do with as they please, explored before pursuing the current API design? If I understand correctly, one of the shortcomings of the current design is a failure in privacy. This design allows a data provider to track what I'm doing outside of their service by tracking which third parties ('data consumers'?) are requesting data about me, and what they're requesting. This allows them to infer things about me to which I might not consent if asked directly. I understand that with the given timeline for implementation, the current design is the one that will be released. I would however advocate for the privacy implications being thoroughly detailed and published so that, among other things, consumers might be aware of how their expected privacy might be violated by less-than-scrupulous data providers.
@anzbankau regarding the financialInstitution field: if you are deriving direct debits delivered through BECS from transaction data, how is the financialInstitution not going to be known? -JB-
Minor Product Reference Corrections

James, Brian,
Note: I did try to confirm that the 'missing' enums were not removed due to feedback, but if I missed the post(s) please ignore. Thanks,
Hi James, considering that specs for all endpoints of a given Standards version are being collated in a single swagger file, how will documentation for different x-v versions of a single endpoint be managed?
Apologies for not being more responsive in this thread. The action has been in the v1 threads and InfoSec for the last few weeks, so I will try to correlate the feedback elsewhere with the issues raised here, as many are consistent. Rather than dump everything into a single comment, I'll respond to each of the feedback items individually. -JB-
Regarding product categories: business loans and overdrafts being excluded was an oversight and they will be added back in. The description has also been modified as suggested. -JB-
I believe rate tiers have been dealt with and resolved (at least for v1 completion) in the Product Reference thread and the Accounts & Balances v1 thread. -JB-
Account creation date has been resolved by adding an optional creationDate field to the account structure. -JB-
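As a sketch only (the sibling fields shown are illustrative, not taken from the published schema), an account object carrying the optional field might look like:

```json
{
  "accountId": "12345",
  "displayName": "Everyday Transaction Account",
  "creationDate": "2015-07-21"
}
```

Being optional, holders that do not retain the open date could simply omit the field.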
I believe most of the descriptions have been updated and corrected via the v1 threads (or at least will be once all of the changes for these decisions are applied to the swagger). -JB-
The meta object is a part of the request payload structure as per the standard payload conventions that align to the JSON API standards. The fact that it is optional and empty does not invalidate its inclusion in the structure for bulk account-based APIs. I believe I have gone through and updated all of the entries for meta in the swagger to make them optional unless specific meta fields are required (for instance, for paging). -JB-
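A rough illustration of the convention being described, assuming JSON API-style top-level members (the field names and URL here are examples, not normative):

```json
{
  "data": { "accounts": [] },
  "links": {
    "self": "https://data.holder.example/accounts?page=1",
    "next": "https://data.holder.example/accounts?page=2"
  },
  "meta": {
    "totalRecords": 57,
    "totalPages": 3
  }
}
```

For endpoints without paging, meta could legitimately be present but empty (`{}`), or omitted where it is optional.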
In response to your query @mattp-rab, I don't believe proof of scale will be a requirement for going live with an implementation. This will be up to the ACCC as the regulator; however, it has not been a part of discussions that I have been involved in. -JB-
Regarding @anzbankau clarification requests:
-JB- |
In response to @WestpacOpenBanking, I believe the technical issues raised regarding address leakage have been addressed. The account address will remain, and the CX team have attempted to address identified customer misunderstandings through improved data cluster language. -JB-
Regarding the error object: the structure of the object will be consistent across the standards as new industries come on board. The intent is also that, as error types are identified, they are given unique IDs. Whether ID ranges for specific industries will be used, or just the assurance of uniqueness, is not yet determined. -JB-
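Since the structure was not settled at the time, the following is purely a hypothetical shape for such an error payload (field names assumed from JSON API-style conventions, the unique ID format invented for illustration):

```json
{
  "errors": [
    {
      "code": "AU.CDS.0001",
      "title": "Invalid account",
      "detail": "The account ID provided does not exist or has not been consented"
    }
  ]
}
```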
The feedback regarding the direct debit authority structure and optional fields was addressed in the Direct Debit Authorisation v1 thread. The financialInstitution and description fields will remain mandatory but free form. -JB-
In response to the comments from @NickDarvey, the suggestion that the standard does not give customers greater control over their data is demonstrably false. The standards clearly improve the current situation, which is characterised by no sharing at all or by screen scraping with minimal customer control. The CDR clearly provides greater control than this. It is valid to suggest that a higher standard could be achieved or that other models could have been applied. Version 1 of the standards is constrained by the recommendations of the Farrell Report, which was focussed on a consent-based model utilising OIDC and based on the UK Open Banking standards. As such, the work here has been constrained by that brief, so investigation of other models has not been formally undertaken. This is, however, an area where the perfect becomes the enemy of the good. The CDR as it stands in version 1 will provide a demonstrable improvement over the current status quo. It will also form a foundation on which other, potentially better or different, models can be suggested and potentially tested. This is why the standards have been designed to accommodate versioning and extensibility. -JB-
Regarding @anzbankau comments on Product Reference:
-JB- |
In response to @paganwinter, the problem of multiple versions in a single swagger is known. Frankly, it's a doozy and I'm not yet clear what to do about it. One option that has been suggested so far is to generate the swagger from Java models, but this means that we are creating a proprietary replacement for swagger, which is widely understood. Another alternative is that we could have a single, non-conformant swagger with all of the versions in there (marked by an x-version extension field) and then generate multiple swaggers from it, maybe one for each endpoint version and one containing only the most recent versions. This has its own problems in that we will have to build the conversion pipeline (which we need for Java models too). I'm open to suggestions though :) -JB-
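The x-version idea mentioned above might look roughly like this in the combined (non-conformant) master swagger; this is a sketch of the suggestion, not an agreed format:

```yaml
paths:
  /banking/accounts:
    get:
      # Hypothetical vendor extension recording the endpoint version.
      # A build step would split this master file into one conformant
      # swagger per endpoint version, plus a latest-versions swagger.
      x-version: 2
      summary: Get Accounts
```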
I think I've addressed everything. I'll be committing swagger changes shortly. I'll also be closing this thread and opening thread 6. Thanks everyone for the work getting to this point. -JB-
This issue has been opened to capture feedback on the standards as a whole. The standards site will be incrementally updated to accommodate this feedback. This is the fifth cycle of holistic feedback for the standards.
-JB-