* Re-mapping the `updateAttributes` endpoint to use the `PATCH` and `PUT` (configurable) verbs
* Exposing `replaceById` and `replaceOrCreate` via the `POST` and `PUT` (configurable) verbs
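A minimal configuration sketch of the idea; the `replaceOnPUT` setting name is an assumption here, and "Car" is an example model:

var loopback = require('loopback');

var Car = loopback.createModel('Car', { name: 'string' }, {
  // Assumed setting: with `true`, PUT performs a full replace
  // (replaceById / replaceOrCreate); with `false`, PUT keeps the
  // updateAttributes behaviour. PATCH always maps to updateAttributes.
  replaceOnPUT: true
});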
Fix `getIdFromWhereByModelId()` to correctly detect the situation
when "bulkUpdate" performs a write operation using a where filter
containing both the id attribute and all other model attributes.
This should significantly improve the performance of change replication,
because the cost of running rectifyAll is very high.
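A simplified sketch of the detection idea (not LoopBack's actual implementation; the function name and `idName` parameter are illustrative):

function getIdFromWhere(where, idName) {
  // "bulkUpdate" may send a where filter such as
  //   { id: 42, name: 'Ford', maker: 'US' }
  // i.e. the id plus every other model attribute. The extra
  // properties do not widen the match, so the id alone is enough
  // to rectify a single change and skip the rectifyAll() fallback.
  if (where[idName] === undefined) return undefined;
  return where[idName];
}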
Improve the id-detection algorithm in the "after save" hook
to correctly handle "updateAttributes" as a single-model change
and NOT trigger a full "rectify all" scan.
Add end-to-end unit tests verifying enforcement of access control during
conflict resolution.
Implement two facade methods providing REST API for Change methods used
by conflict resolution:
PersistedModel.findLastChange
  GET /api/{model.pluralName}/{id}/changes/last
PersistedModel.updateLastChange
  PUT /api/{model.pluralName}/{id}/changes/last
By providing these two methods on PersistedModel, replication users
don't have to expose the Change model via the REST API. More
importantly, these two methods use the same set of ACL rules
as other (regular) PersistedModel methods.
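A usage sketch, assuming a model named "Car" and a `(id, callback)` signature for the facade method:

// Over REST:
//   GET /api/Cars/123/changes/last
//   PUT /api/Cars/123/changes/last
// Or from JavaScript:
Car.findLastChange(123, function(err, change) {
  if (err) return console.error(err);
  // `change` describes the last tracked modification of instance 123
  // and goes through the same ACL checks as other Car methods.
});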
Rework `Conflict.prototype.changes()` and `Conflict.prototype.resolve()`
to use these new facade methods.
Implement a new method `Conflict.prototype.swapParties()` that provides
a better API for the situation when a conflict detected in Remote->Local
replication should be resolved locally (i.e. in the replication target).
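A sketch of the intended usage; `RemoteCar` and `LocalCar` are placeholders, and it is assumed here that `swapParties()` returns a conflict with source and target exchanged:

RemoteCar.replicate(LocalCar, function(err, conflicts) {
  if (err) return console.error(err);
  conflicts.forEach(function(conflict) {
    // The conflict reports the remote model as its source; swap the
    // parties so that resolve() applies the resolution on the local
    // (target) side.
    conflict.swapParties().resolve(function(err) {
      if (err) console.error('Cannot resolve conflict:', err);
    });
  });
});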
Correctly handle the case when the model is attached multiple times
during the lifecycle; this happens because `loopback.createModel`
always makes an attempt to auto-attach.
1) Add integration tests running change replication over REST to verify
that access control at model level is correctly enforced.
2) Implement a new access type "REPLICATE" that allows principals
to create new checkpoints, even though they don't have full WRITE
access to the model. Together with the "READ" permission, these
two types allow principals to replicate (pull) changes from the server.
Note that anybody with the "WRITE" access type is automatically
granted the "REPLICATE" type too.
3) Add a new model option "enableRemoteReplication" that exposes
replication methods via strong remoting, but does not configure
change rectification. This option should be used by clients
when setting up Remote models attached to the server via the remoting
connector (see the configuration sketch after this list).
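A configuration sketch only; the model name, the "replicator" role, and the exact option placement are assumptions:

var loopback = require('loopback');

// Server side: READ + REPLICATE lets the example "replicator" role
// pull changes without granting full WRITE access.
var Car = loopback.createModel('Car', { name: 'string' }, {
  acls: [
    { principalType: 'ROLE', principalId: 'replicator',
      permission: 'ALLOW', accessType: 'READ' },
    { principalType: 'ROLE', principalId: 'replicator',
      permission: 'ALLOW', accessType: 'REPLICATE' }
  ]
});

// Client side: the same model attached to the server via the remoting
// connector, with replication methods exposed but no change
// rectification configured.
var RemoteCar = loopback.createModel('Car', { name: 'string' }, {
  enableRemoteReplication: true
});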
- `loopback.registry` is now a true global registry
- `app.registry` is unique per app object
- `Model.registry` is set when a Model is created using any registry method
- `loopback.localRegistry` and `loopback({localRegistry: true})`: when set to `true`, a `Registry` is created per `Application`. It defaults to `false`.
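For example (a sketch; the "db" datasource is assumed to be configured elsewhere):

var loopback = require('loopback');

// Each app created with `localRegistry: true` gets its own registry,
// so models registered here are not shared with other app objects.
var app = loopback({ localRegistry: true });
var Color = app.registry.createModel('Color', { name: 'string' });
app.model(Color, { dataSource: 'db' });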
Modify the files to export a model factory function accepting
a `registry` argument. This is a preparation step for per-application
models - see #1212.
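A sketch of the factory shape; the file name and model name are illustrative, not the actual built-in models:

// e.g. common/models/my-model.js
module.exports = function(registry) {
  // Look up base classes through the registry that owns this model,
  // instead of through the global loopback object.
  var PersistedModel = registry.getModel('PersistedModel');
  return PersistedModel.extend('MyModel', { name: 'string' });
};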
Deprecate `Change.handleError`; it was used inconsistently for a subset
of possible errors only. Rework all `Change` methods to always report
all errors to the caller via the callback.
Rework `PersistedModel` to report change-tracking errors via the
existing method `PersistedModel.handleChangeError`. This method
can be customized on a per-model basis to provide different error
handling.
The default implementation emits an `error` event on the model class;
users can attach an event listener that provides a custom error
handler.
NOTE: Unhandled `error` events crash the application by default.
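A sketch of both customization points; "MyModel" is a placeholder and the `handleChangeError` signature (receiving just the error) is an assumption:

// Option 1: keep the default behaviour and listen for the event.
MyModel.on('error', function(err) {
  console.error('Change tracking failed for MyModel:', err);
});

// Option 2: override the hook on a per-model basis.
MyModel.handleChangeError = function(err) {
  if (err) console.error('Change tracking failed:', err);
};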
Use the recently added context property `ctx.instance` to improve
the accuracy of the algorithm detecting whether a single or
multiple models were deleted.
Modify `Change.diff()` to include current data revision in each
delta reported back. The current data revision is stored in
`delta.prev`.
Modify `PersistedModel.bulkUpdate()` to check that the current data
revision matches `delta.prev` and report a conflict if a third party
has modified the database in the meantime.
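A conceptual sketch of the check, not LoopBack's implementation; the `revisionOf` helper and the delta fields used here are assumptions for illustration:

function verifyDelta(TargetModel, delta, revisionOf, cb) {
  TargetModel.findById(delta.modelId, function(err, current) {
    if (err) return cb(err);
    if (revisionOf(current) !== delta.prev) {
      // A third party changed the record after the diff was computed:
      // report a conflict instead of silently overwriting their data.
      return cb(null, { conflict: true, modelId: delta.modelId });
    }
    cb(null, { conflict: false });
  });
}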
Fix `Change` implementation and tests so that they are no longer
attempting to create instances with duplicate ids.
(This used to work because the memory connector was silently
converting such requests to updateOrCreate/findOrCreate.)
Rework the Change model to merge changes made within the same
Checkpoint.
Rework `replicate()` to run multiple iterations until no more changes
are replicated. This ensures that the target model is left in
a clean state with no pending changes associated with the latest
(current) checkpoint.
Extend `PersistedModel.replicate` to pass the newly created checkpoints
as the third callback argument.
The typical usage of these values is to pass them as the `since`
argument of the next `replicate()` call.
var since = -1;  // checkpoint(s) to start the replication from

function sync(cb) {
  LocalModel.replicate(
    since,
    RemoteModel,
    function(err, conflicts, cps) {
      if (err) return cb(err);
      if (!conflicts.length) {
        // remember the new checkpoints for the next replication run
        since = cps;
        return cb();
      }
      // resolve conflicts and try again
    });
}
Before this change, in the case of a one-way replication, the remote
checkpoint was never updated, thus the "diff" step had to check
all changes made throughout the whole history.
This commit fixes the problem by creating a new remote checkpoint
at the same time when a local checkpoint is created.
It is important to create the new checkpoint before the replication is
started to prevent a race condition where a remote change can end up
being associated with an already replicated checkpoint.
Rework the "replicate()" to create a new source checkpoint as the first
step, so that any changes made in parallel will be associated with
the next checkpoint.
Before this commit, there was a race condition where a change might
end up being associated with the already-replicated checkpoint and thus
not picked up by the next replication run.