* Fix missing option arguments in scope.js
Added option arguments in find.relatedmodel
Added a test case
* Add order filter in verify function
Fix for Cloudant test
In #1298, the spec/doc for polymorphic relations was reviewed; a usage
sketch follows the two lists below.

**hasX relation**

- `type`: **hasMany**
- `as`: redefines **this** relation's name (optional)
- `model`: **modelTo**
- `polymorphic`:
  - typeof `polymorphic` === `String`
    - matching **belongsTo** relation name
    - `foreignKey` is generated as `polymorphic + 'Id'`
    - `discriminator` is generated as `polymorphic + 'Type'`
  - typeof `polymorphic` === `Object`
    - `as`: **DEPRECATED**, should display a warning; replaced by `selector`
    - `selector`: should match the **belongsTo** relation name if the
      latter is defined with `{polymorphic: true}`
      - (required) if both `foreignKey` and `discriminator` are **NOT** provided
      - (extraneous) if both `foreignKey` and `discriminator` are provided
    - `foreignKey`: a property of modelTo, representing the fk to
      modelFrom's id
      - generated by default as `selector + 'Id'`
    - `discriminator`: a property of modelTo, representing the actual
      modelFrom to be looked up and defined dynamically
      - generated by default as `selector + 'Type'`
---
**belongsTo relation**

- `type`: **belongsTo**
- `as`: redefines **this** relation's name (optional)
- `model`: **NOT EXPECTED**: should throw an error at relation validation
- `polymorphic`:
  - typeof `polymorphic` === `Boolean`
    - `foreignKey` is generated as `relationName + 'Id'`
    - `discriminator` is generated as `relationName + 'Type'`
  - typeof `polymorphic` === `Object`
    - `as`: **DEPRECATED**, should display a warning; replaced by `selector`
    - `selector`:
      - (required) if both `foreignKey` and `discriminator` are **NOT** provided
      - (extraneous) if both `foreignKey` and `discriminator` are provided
    - `foreignKey`: a property of modelTo, representing the fk to
      modelFrom's id
      - generated by default as `selector + 'Id'`
    - `discriminator`: a property of modelTo, representing the actual
      modelFrom to be looked up and defined dynamically
      - generated by default as `selector + 'Type'`
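A minimal usage sketch of the two shapes above (model and relation
names are illustrative, not taken from the PR):

```
const Picture = modelBuilder.define('Picture', {name: String});
const Author = modelBuilder.define('Author', {name: String});

// belongsTo with `polymorphic: true`: keys are generated from the
// relation name, i.e. foreignKey 'imageableId' and discriminator
// 'imageableType'.
Picture.belongsTo('imageable', {polymorphic: true});

// hasMany with `polymorphic` as a String matching the belongsTo
// relation name defined above.
Author.hasMany(Picture, {as: 'pictures', polymorphic: 'imageable'});
```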
This PR supersedes the existing deepMerge algorithm used to merge
settings of parent and child models with a new algorithm that allows
specifying the way each setting is merged or mixed in.

The configuration of this algorithm uses a merge policy specification.
The `getMergePolicy()` helper of BaseModelClass can be used to ease
model merge configuration.

The expected merge behaviour for each option is presented below,
followed by an illustrative policy sketch.
NOTE: This applies to top-level settings properties.

- Any
  - `{replace: true}` (default): child replaces the value from parent
  - assigning `null` on a child setting deletes the inherited setting
- Arrays
  - `{replace: false}`: unique elements of parent and child cumulate
  - `{rank: true}`: adds the model inheritance rank to array
    elements of type Object {} as internal property `__rank`
- Object {}
  - `{replace: false}`: deep merges parent and child objects
  - `{patch: true}`: child replaces inner properties from parent
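For illustration, a merge policy specification maps setting names to
the options above. The entries below are examples only, not the full
recommended policy:

```
{
  description: {replace: true}, // child replaces the parent's value
  relations: {patch: true},     // child patches inner properties
  acls: {rank: true},           // rank array elements by inheritance
  __default: {replace: true},   // assumed fallback for unlisted settings
}
```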
The recommended merge policy is returned by `getMergePolicy()`
when calling the method with option `{configureModelMerge: true}`.
The legacy built-in merge policy is returned by `getMergePolicy()`
when option `configureModelMerge` is not provided.

NOTE: the recommended policy also delivers ACLs ranking in addition
to the legacy behaviour, as well as fixes for the settings
`description` and `relations`.

`getMergePolicy()` can be customized using the model's setting
`configureModelMerge` as follows:
```
{
  // ..
  options: {
    configureModelMerge: {
      // merge options
    }
  }
  // ..
}
```
The `getMergePolicy()` method can also be extended programmatically as
follows:
```
myModel.getMergePolicy = function(options) {
  const origin = myModel.base.getMergePolicy(options);
  return Object.assign({}, origin, {
    // new/overriding options
  });
};
```
New type that preserves string input as a string, but ensures that
the string is a valid Date. Additionally, it provides a `.toDate()`
function that returns the Date object representation of the string.
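A usage sketch, assuming the type is exported as `DateString` (the
exported name is an assumption here, not confirmed by this note):

```
const DateString = require('loopback-datasource-juggler').DateString;

const d = new DateString('2001-02-03'); // rejects invalid dates
console.log(d.toString()); // '2001-02-03', the original string preserved
console.log(d.toDate());   // the Date object representation
```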
* fix check for null
* add tests
* fix for early return
* Allow check for null and non-existent value
Some connectors use a non-existent property instead of allowing null.
Modified the test case to check whether the value is null or the
property does not exist.
* Check for null value with geo near query
* Apply requested changes
* change test to two users and simplify
* check error first
* Fix simple query test case with null value
* BDD for connectors w/o null support
* handle deep geo-near queries (#1216)
A dedicated `mongKey` is added in `geo.nearFilter` for MongoDB.
Fixes geo min-distance tests, as the filter now expects an array.
* Fix for string geoPoints
* Add geo-point handling for ibmdb connectors
* Handle geo-point type for the Cassandra connector
Fix `_targetClass` on the scope function when using a hasManyThrough
relation with customized relation names and foreignKey/keyThrough.

This bug is caused by `_targetClass` using the camel-cased
`relationName` (e.g. if `relationName` is `bbb`, `targetClass`
would be `Bbb`), which does not exist as a model class.

This will also suppress "not exposed" warnings when generating the
Angular SDK, and generate endpoints for this scope.
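An illustration of the failing setup (model and relation names are
hypothetical):

```
// hasMany-through with a customized relation name and keys:
Physician.hasMany(Patient, {
  through: Appointment,
  as: 'myPatients',
  foreignKey: 'physicianId',
  keyThrough: 'patientId',
});
// Before the fix, the scope's `_targetClass` was derived from the
// relation name ('MyPatients'), which is not a registered model class;
// the fix resolves it to the actual target model instead.
```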
* Fixes #1275: transform *fields* property into an array (see the
  sketch after this list)

The `include` filter takes into consideration a string 'fields'
property and transforms it into an array containing this string.

* Added error handling for the `include` filter.
* `execTasksWithInterLeave` now contains a try-catch block
  in order to catch any unexpected errors.
* `linkManyToMany` now checks if *modelToIdName* exists on
  *target* before continuing.
* Added a unit test for *include* with string fields
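A sketch of the shorthand now accepted (model and relation names are
hypothetical, `cb` is an assumed callback):

```
// 'fields' given as a plain string...
User.find({include: {relation: 'posts', scope: {fields: 'title'}}}, cb);
// ...is internally transformed into the array ['title'].
```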
Before this change, when resolving the full connector path, all errors
were ignored. As a result, when the connector was installed but not
correctly built (e.g. loopback-connector-db2, which uses a native
addon), a very confusing message was reported by LoopBack.
In this commit, I am fixing the code handling `require()` errors
to ignore only MODULE_NOT_FOUND errors that contain the name
of the required module.
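A minimal sketch of the pattern (the function name and exact message
check are illustrative, not the actual implementation):

```
function tryRequire(name) {
  try {
    return require(name);
  } catch (err) {
    // Swallow only "module not found" errors that refer to the required
    // module itself; re-throw anything else (e.g. a connector that is
    // installed but whose native addon failed to build).
    if (err.code === 'MODULE_NOT_FOUND' && err.message.indexOf(name) !== -1) {
      return undefined;
    }
    throw err;
  }
}
```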
The query-string parser used by Express
(https://github.com/ljharb/qs#parsing-arrays)
limits the size of arrays that are created from query strings to 20
items. Arrays larger than that are converted to objects using numeric
indices.
This commit fixes the coercion algorithm used by queries to
treat number-indexed objects as arrays. We still maintain a strict
understanding of an "array-like object" to limit the opportunity for
subtle bugs. In particular, the presence of non-index keys is an
indication that the object was not intended to be interpreted as
an array.
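A sketch of the "array-like object" check described above (the exact
implementation may differ):

```
function isArrayLike(obj) {
  const keys = Object.keys(obj);
  // Every key must be the decimal string of its own position; a single
  // non-index key (e.g. {0: 'a', foo: 'b'}) disqualifies the object.
  return keys.length > 0 && keys.every(function(k, ix) {
    return k === String(ix);
  });
}

isArrayLike({0: 'a', 1: 'b'});   // true  -> coerced to ['a', 'b']
isArrayLike({0: 'a', foo: 'b'}); // false -> left as a plain object
```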
- Rename `flush` to `deleteAll`
- Add `delete`
- Detect `delete/deleteAll` before running downstream test suites
- Fall back to an unoptimized `deleteAll` when the connector does not
  support `deleteAll` but supports `delete` (see the sketch after this
  list)
- Return 501 for connectors not supporting `delete` or `deleteAll`
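A hedged sketch of the unoptimized fallback, assuming the existing
`iterateKeys()` iterator API (the actual internals may differ):

```
function deleteAllFallback(Model, cb) {
  const it = Model.iterateKeys();
  (function next() {
    it.next(function(err, key) {
      if (err) return cb(err);
      if (key === undefined) return cb(); // iterator exhausted
      Model.delete(key, function(err) {
        if (err) return cb(err);
        next(); // delete one key, then continue with the next
      });
    });
  })();
}
```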
Defining a model relation with the name "trigger" caused the model to
be unable to insert records, yet no error was thrown when such a
relation was defined. A check for the relation name "trigger" is now
added, so an error is thrown instead.
validateNumericality didn't test whether the attribute's value is
actually a number, only whether its type is number.

Furthermore, nullCheck had a wrong testing order: it first checked if
the value is null, and only later if it is blank. Also, the null check
used only two equals signs instead of three. We don't use blank()
anymore; testing whether the variable is undefined should be fine too.

Added tests covering validateNumericality.
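A sketch of the behaviour now covered by the tests (model setup
hypothetical):

```
User.validatesNumericalityOf('age');

const u = new User({age: 'not a number'});
u.isValid(); // false: the value itself must be a number,
             // not merely a property declared with type Number
```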
Enhance the built-in memory connector to correctly support nested
queries for arrays in addition to objects.
E.g. if "friends" is an array of objects containing "name", then
{ where: { "friends.name": "Jane" } } should match records containing
a friend called "Jane".
Models attached to a KeyValue connector get the following *static*
methods:
```
Color.set(key, value);
Color.set(key, value, ttl);
Color.set(key, value, { ttl: ttl });

Color.get(key);

Color.expire(key, ttl);
```
Result of compat flag cleanup.
- The current implementation has a wrapper, DataSource.registerType(),
  for ModelBuilder.registerType(). This removes the wrapper to
  encourage use of the original method.
Allow direct save of changes on an embedded model to be persisted on
the parent document.

```
Person.embedsOne(Address);

Person.findById(someId)
  .then(function(p) {
    var address = p.addressItem();
    address.street = 'new street';
    // This will now persist changes on the parent document
    return address.save();
  });
```
[forward-port of #949]
The setting controls the strict mode used for embedded property types,
for example the type of the "address" property in this model
definition:

```
modelBuilder.define('TestEmbedded', {
  name: 'string',
  address: {
    street: 'string',
  },
});
```
It was previously completely undocumented. There are additional methods
that add promises, but I figure accurately documenting some is better
than none. :)
Simplify DataAccessObject.create() and stop returning the
instance/array of instances. Users should always use the callback (or
the returned promise) to get the created instance(s).
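For example:

```
// Before: the (not-yet-saved) instance was returned synchronously.
// var user = User.create(data); // no longer supported

// Now: use the callback...
User.create(data, function(err, user) { /* ... */ });

// ...or the returned promise.
User.create(data).then(function(user) { /* ... */ });
```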
create() triggers
- before save
- after save
updateById() triggers
- before save
- after save
destroy() triggers
- before delete
- after delete
The implementation here is intentionally left with fewer features
than the regular DAO methods provide; the goal is to get a partial
(but still useful!) version released soon.
Limitations:
- `before save` & `after save` hooks don't provide `ctx.isNewInstance`
- async validations are not supported yet
- `persist` and `loaded` hooks are not triggered at all
- the `before delete` hook does not provide the `ctx.where` property,
  and it's not possible to change the outcome of `destroy()` using this
  hook. Note that the regular DAO does support this.
- updating embedded instances triggers update of the parent (owning)
model, which is correct and expected. However, the context provided
by `before save` and `after save` hooks on the parent model is sort of
arbitrary and may include wrong/extra data. The same probably applies
to the scenario when deleting embedded instances triggers update of
the parent model.
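For reference, hooks on the embedded model are registered like regular
operation hooks; a sketch (model names hypothetical):

```
// Address is embedded via Person.embedsOne(Address):
Address.observe('before save', function(ctx, next) {
  // Note: ctx.isNewInstance is not provided here (see limitations above).
  next();
});
```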
create() triggers
- before save
- after save
update() triggers
- before save
- after save
destroy() triggers
- before delete
- after delete
The implementation here is intentionally left with fewer features
than the regular DAO methods provide; the goal is to get a partial
(but still useful!) version released soon.