Forum Discussion
I would say that if an instance can be uniquely identified from data already on hand (as described above), then the datasource should use that identifier as the instance wildvalue, not some arbitrary other value that could spawn excess instances whenever a customer renames or otherwise changes things.
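As a minimal sketch of the point: discovery output keyed on a stable identifier survives cosmetic renames, while output keyed on a display name does not. The device fields (`serial`, `name`) and the `wildvalue##displayname` line format are illustrative assumptions here, not a definitive implementation.

```python
# Hypothetical sketch: emitting discovery instances with a stable wildvalue.
# Field names and the '##' line format are assumptions for illustration.

def discovery_lines(interfaces):
    """Build discovery output lines of the form 'wildvalue##displayname',
    keying each instance on a stable unique identifier (an assumed
    serial-number field) rather than a user-editable display name."""
    lines = []
    for iface in interfaces:
        wildvalue = iface["serial"]   # stable unique ID: survives renames
        display = iface["name"]       # cosmetic: safe for customers to edit
        lines.append(f"{wildvalue}##{display}")
    return lines

print("\n".join(discovery_lines([
    {"serial": "SN-001", "name": "uplink"},
    {"serial": "SN-002", "name": "core-2"},
])))
```

If a customer later renames "uplink", the wildvalue `SN-001` is unchanged, so no duplicate instance is created and no history is orphaned.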
As for data retention, I have found that decisions are often made that lead to loss of data, and it is distressing. I just had a case where I pointed out a typo in a datapoint label. It was fixed, but the fix kills all historical data for that datapoint. Why must the label itself be the index, rather than a label tied to a persistent index?
I see similar problems with DS replacements. I suggested in a F/R long ago that it be possible, at DS load time, to upgrade from the previous DS version. I fully appreciate that new datasources with alternate structures should be created, but a migration function that let you select the datapoint mapping would avoid losing data (currently the best option is to run both in parallel until you have enough new data to not look foolish to your clients). Preferably this would be built into the new datasource, so the migration would happen automatically, or at least with guidance. That sort of mechanism could also have handled my typo'd datapoint issue.
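The suggested migration function could be as simple as a rename map shipped with the new DS, applied to stored history at load time. This is a sketch of the idea only: the datapoint names, the map, and the `migrate_history` helper are all hypothetical, not an existing platform feature.

```python
# Hypothetical sketch of the proposed migration hook: a mapping from old
# datapoint names to new ones, applied when a replacement DS is loaded so
# historical data carries over instead of being orphaned.
# All names and structures here are illustrative assumptions.

DATAPOINT_MAP = {
    "CPUUtilizaton": "CPUUtilization",   # the typo'd label maps to the fix
    "MemUsedPct": "MemoryUsedPercent",   # a renamed datapoint in the new DS
}

def migrate_history(old_history):
    """Re-key stored time-series data so it follows renamed datapoints.
    Datapoints not in the map keep their original name."""
    migrated = {}
    for name, series in old_history.items():
        migrated[DATAPOINT_MAP.get(name, name)] = series
    return migrated
```

With something like this built into the replacement DS, the typo'd-label case above becomes a one-entry map rather than a loss of all prior data.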
Nuts and bolts stuff like that is hard to market, though :(.