Service Architecture Task Group

Also see SA Discussion pages

Membership (as of 22/9/09)

EricBoisvert (Chair), AlistairRitchie, TimDuffy, SteveRichards, JeanJacquesSerrano (Co-Chair), AgnesTellezArenas, MarcusSen, DalePercival


  • Tasks:
    • develop the formal architecture required to implement services that deliver the use cases
      • GeoSciML profiles for WMS and WFS (standard requests that must be supported, standard responses expected)
      • Registry profiles for datasets, services, and for system specifications (i.e. feature-type definitions, query profiles, vocabulary services)
      • design of a CS/W implementation of this (if appropriate);
    • document the architecture including designing formal component conformance tests,
  • End date: Ongoing

GeoSciML Information Resources

WPS Services

Is this test interesting in other GeoSciML domains?

WPS might be a good way to design interoperable services using our GeoSciML schemas for describing the parameters of the services.

-- ChristianBellier 28 Jul 2008
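The attached Java classes (BoreholeCount.java and friends) implement this WPS process by parsing a GeoSciML document and counting its boreholes. Below is a minimal sketch of that counting step in Python, assuming GeoSciML 2.0 gsml:Borehole elements; the sample document is made up for illustration:

```python
# Sketch of the borehole-counting step of the WPS process described in the
# attached Java classes. Assumes the GeoSciML 2.0 namespace and
# gsml:Borehole elements; the sample collection below is hypothetical.
import xml.etree.ElementTree as ET

GSML_NS = "urn:cgi:xmlns:CGI:GeoSciML:2.0"

def count_boreholes(geosciml_xml: str) -> int:
    """Count gsml:Borehole features anywhere in a GeoSciML document."""
    root = ET.fromstring(geosciml_xml)
    return len(root.findall(f".//{{{GSML_NS}}}Borehole"))

# Hypothetical two-borehole collection for illustration:
sample = f"""
<wfs:FeatureCollection xmlns:wfs="http://www.opengis.net/wfs"
                       xmlns:gml="http://www.opengis.net/gml"
                       xmlns:gsml="{GSML_NS}">
  <gml:featureMember><gsml:Borehole gml:id="bh1"/></gml:featureMember>
  <gml:featureMember><gsml:Borehole gml:id="bh2"/></gml:featureMember>
</wfs:FeatureCollection>"""

print(count_boreholes(sample))  # 2
```

A real WPS process would fetch the document from the GeoSciML boreholes URL passed as an input parameter before counting.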

Architecture issues to be addressed at the Quebec F2F meeting 22/09/09 (see DECISIONs):

A list of issues that the service architecture profile might address. Hopefully we can address some of these at the Quebec F2F. Please add any others you can think of.
  1. how are CGI urns resolved? Should the codeSpace for gml:name values that hold the UUID for the feature (while we wait for gml:identifier) be the resource location of the (or a) service serving the feature in the first place?
    • I suggest a URN resolution WPS would be a nice architecture solution (and a cool testbed demonstration) -- EricBoisvert - 2009-09-04
    • There is some discussion on this from the Oceans Interoperability Project -- DalePercival - 2009-09-23
    • DECISION - Eric to flesh out an example and discuss it with Simon Cox and his URN resolver specification. "The resource location for the urn resolver will be included in a vendor tag in the WFS's GetCapabilities response. If your service produces urns then you must publish the location of the urn resolver in your GetCapabilities response".
    • According to the OGC OWS the place to put this information is the ExtendedCapabilities element within the OperationsMetadata which is of anyType. Individual software vendors and servers can use this element to provide metadata about any additional server abilities. -- DalePercival - 2009-09-23
  2. How are different resolutions of data identified, e.g. 1:20000, 1:100000, 1:1000000 scale data in the same bounding box? Different service URLs? Is this a problem? DECISION - This is a data model issue.
  3. MappedFeatures can be contacts, faults, geologic unit outcrops, etc. How are these distinguished? Possibilities include different service URLs, or using criteria in the Filter part of the WFS request. DECISION - Might work using an OGC Filter getfeature bodge. Best practice issue. Example filter for this bodge is:
         <wfs:GetFeature xmlns:wfs="http://www.opengis.net/wfs" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" xmlns:gsml="urn:cgi:xmlns:CGI:GeoSciML:2.0" service="WFS" version="1.1.0" maxFeatures="10">
           <wfs:Query typeName="gsml:MappedFeature">
             <ogc:Filter>
               <ogc:PropertyIsLike wildCard="*" singleChar="." escape="!">
                 <!-- the property path and literal below are illustrative: select mapped features whose specification is a contact -->
                 <ogc:PropertyName>gsml:specification/gsml:Contact/gml:name</ogc:PropertyName>
                 <ogc:Literal>*</ogc:Literal>
               </ogc:PropertyIsLike>
             </ogc:Filter></wfs:Query></wfs:GetFeature>
  4. How should the gsml:metadata element be used - separate document or service, or inline gmd:MD_Metadata elements? MD_Metadata elements might be part of a gsml:GSML collection? DECISION - Becomes a testbed use case to set up a CSW metadata server.
  5. How to package a 'geologic information' collection about a particular area? Is this a gsml collection with point, line and polygon MappedFeatures, along with related vocabularyItems, GeologicUnits etc.? This is a service architecture question that impacts the data model (add gsml:Map to model?). DECISION - Agreed this is in fact a data model question.
  6. Some 'standard' way to associate/identify SLDs to symbolize gsml:GeologicFeatures in a feature collection. DECISION - EB: it is actually part of the SLD spec that you can declare which WFS to use. No need to test it in the next GeoSciML testbed; no clear use case.
  7. Should data value specification schemes be specified by service architecture or by xml schema. this bears directly on discussion of CGI_Value in the data model. DECISION - Agreed is a data model issue.
  8. WMS-WFS coupling:
    • DECISION - Use the GSV/GA approach that 'binds' a WFS to a WMS by providing a GetFeature request to the associated service as a property in the GetFeatureInfo response. An example is the ll:gsmlHref property here. To ensure that service behaviour is consistent the property name should be consistent. We recommend: wfsGetFeature.
    • Note that the example request provided used the featureid parameter ( &featureid=gsml.geologicunit.16777549126930472 ). This assumes the service uses persistent gml:ids. If this is not the case then a BBOX, or similar, request should be used (basically anything that consistently returns an instance of the desired feature.)
    • Yes, I do feel dirty implementing this. -- AlistairRitchie - 2009-09-23
  9. Use of layers in WMS - how do we create metadata for layers? DECISION - SR had observed that the ISO 19119 services metadata implementation in the ISO 19139 XML schema did not seem to identify where
  10. Registration of services with CSW - use of 'coupledResource', if/how to index layers (WMS) or feature types (WFS), use of ISO 19115 profile. DECISION - see previous item.
  11. WSDL vs getCapabilities, getCapabilities to ISO19115. DECISION - Eric to write an engineering report on common queryables with Steve Richard.
  12. use of 'common queryable' aliases to simplify query filter construction (like WFS2). DECISION - Bodge for the next testbed, looking to final implementation by WFS2/FES2. Use age and lithology aliases as we did in testbed 3.
  13. how to specify whether a concept should be expanded in a query - i.e. should a search for age=Mississippian also get age=Visean? DECISION - See WFS2 stored queries and item 12.
  14. can age queries be treated as 1-D spatial queries, to allow for predicates like 'overlaps', 'includes', 'younger than', 'older than'? DECISION - See WFS2 stored queries.
  15. Some software does not implement 'ogc:filter' properly, for instance limiting the number of and/or conditions. DECISION - Software problem.
  16. How to link portrayal rules with concepts of a vocabulary? DECISION - OneGeology-Europe WP3, 5 and 6 will use this soon. The AuScope project and some GeoSciML testbed 3 SLD services did this also. DECISION - This is an issue for CDTG.
  17. How to 'extend' a standard vocabulary? (to add new elements to an existing vocabulary - ex: to add new epochs in the Proterozoic of ICS chart for European Nordic countries [Then the vocabulary would no longer be the ICS Strat Chart, but a whole new vocabulary governed by the Nordic countries, not ICS. Same applies to the Ordovician stages in Australia. -- OliverRaymond - 17 Sep 2009]) [skos:collections provide a mechanism to define a collection of terms that may come from different vocabularies. -- SteveRichard - 2009-09-17] DECISION - This is a CDTG issue.
  18. How to add into vocabularies concept description in various languages? [SKOS would allow multiple definitions with different language localization -- SteveRichard - 2009-09-17] DECISION - This is an issue for CDTG.
  19. Can WFS queries be specified against base classes or substitution groups of polymorphic types? For example, suppose I wanted to filter gsml:MappedFeature to find the mapped features with associated specifications having a particular name. The mapped features could have specifications of different (polymorphic) types (e.g. GeologicStructure subtypes, or GeologicUnit). Querying against the supertype allows queries to be made on the common properties that all types of or derived from that supertype have. Would I query using an xpath with a parent-type (possibly abstract) element name? A PropertyEquals query with something like: gsml:specification/gml:_Feature/gml:name or gsml:specification/gsml:GeologicFeature/gml:name. This would match gsml:GeologicUnit or gsml:GeologicStructure with a matching gml:name. The concrete type would be encoded in the response. [related to #3]. DECISION - Geoserver issue liaison Alistair R and Ben C-D.
  20. Should users be able to make queries based on the information model without knowing the encoding choices made by the data provider, or are queries made in XML-space? For example, if a gsml:MappedFeature has its specification encoded by value, a user might filter by specification URN name using a PropertyEquals query on: gsml:specification/gml:_Feature/gml:name[@codespace=''] (ignoring the fact that this is not supported by WFS 1.1). However, if the specification is encoded by reference, they would have to query PropertyEquals on (assuming the name URN is used as the reference): gsml:specification/@xlink:href - and so the user is querying against the encoding, not the information model. They need to know how specification is encoded before issuing their query. Is this what you (= the GeoSciML community) want?
    • (WFS 2.0 proposes a wfs:valueOf that is supposed to provide a way to tell the server to resolve whatever xlink href that might appear. I think it's silly. The client should not worry about serialisation artefacts when writing a query. -- EricBoisvert - 2009-09-17) DECISION - Geoserver issue liaison Alistair and Ben.
  21. What database polymorphism patterns should GeoServer support? At the moment, we support one-table-per-concrete-type. There are at least three patterns for supporting inheritance hierarchies (might need some pictures): - One table per concrete type (table contains all properties for one type, including inherited properties). - One table per hierarchy (table contains all properties for all types in the hierarchy, with nulls for cells that do not have a particular property for a row of that type). - One table per type (table contains only the properties added in its level of extension). What database polymorphism patterns are used in the community? [couldn't all of these approaches be accommodated by binding the xml to database views instead of concrete tables? -- SteveRichard - 2009-09-17] DECISION - Geoserver issue liaison Alistair and Ben.
  22. Do you have database polymorphism use cases? How should GeoServer decide the mapped type of a property? For example: - If an expression on database values evaluates (in GeoServer) to some particular constant (e.g. true), encode type X. For example (pseudocode): if isNull(LOC_ACC) encode locationalAccuracy as CGI_TermValue ... if not isNull(LOC_ACC) encode locationalAccuracy as CGI_NumericValue ... Are there any other ways that polymorphic types may be instantiated? Have you given any thought to the reversibility of expressions required to allow queries to be made against polymorphic types constructed from the evaluation of expressions on database contents? If expressions are not reversible, query evaluation will require brute force mapping of all features followed by searching. DECISION -Geoserver issue liaison Alistair and Ben.
  23. By the way, do you still want functions in filter queries? If so, it would be good to have a test data set including function definitions. DECISION - Geoserver issue liaison Alistair and Ben. Query functions not needed in Geoserver when it supports WFS2/FES2.

-- SteveRichard - 2009-08-05

Issues 15,16, 17, 18 added by JeanJacquesSerrano - 2009-09-15

Issues 19,20,21,22,23 added by TimDuffy on behalf of Ben Caradoc-Davies of CSIRO Geoserver development - 2009-09-16

Concerning Tim's proposal: we need to seriously consider - or at least be aware they are coming - the ISO WFS 2.0 (ISO/CD 19142) and FES 2.0 (ISO/CD 19143) standards, developed under the joint ISO/OGC agreement (the Final Text versions are available from the OGC portal; you will need an OGC portal password). They may fit with a GeoSciML that is based on GML 3.2.1, and may define much better (for testbeds, and for stable implementation) how we design our GeoSciML and set up and query our GeoSciML v3 services. Eddie Curtis of Snowflake Software has offered for public view some initial thoughts on these standards and how they might be implemented from the point of view of a single commercial WFS supplier (with Ben's help I have also got the Geoserver community looking at this, but have so far only heard a little from CSIRO/AuScope):

"In general the WFS2.0 is more complex than previous versions. It is a broader interface that is trying to cover a wider range of patterns of use. In general broad interfaces are a bad thing - they make implementation complex. A set of more targeted and narrower interfaces is usually better. There are quite a lot of optional operations in the interface. This allows for a wide variation in capability between compliant services and therefore makes interoperability harder to achieve.

The distinction between simple and basic WFS is interesting. Simple is very simple and so should help adoption at that level. However, basic WFS has become pretty complex.

Here are some comments operation by operation for the WFS spec. I haven’t yet gone into as much detail on the FES specification.

Get Capabilities:

Largely the same as 1.1 so should be a matter of routine development work to implement.


I don’t think this has changed so there should be no issues there.


This seems like a sensible replacement for the getObject operation and should be a routine piece of development to implement.

Get Feature:

Need to change to FES 2.0 and to account for the namespace changes that go with it. (The temporal operators currently implemented on FES 1.1 ignore namespaces.) Need to check if there are any new mandatory operations within FES 2.0.

Resolve References (local):

There may be a problem with this part of the specification. It appears that the correct behaviour is that when a link is resolved the target object is encoded in-line instead of using an xlink. This could lead to a situation where the same object is referenced several times and therefore supplied in-line within several features in the return set. This will result in invalid XML since the gml:id will be repeated in each copy of the object and will therefore not be unique within the XML file. (This is exactly the same problem that we had with duplicate IDs in GeoSciML.) This could put the server in a position where it either has to ignore the resolve references settings or return invalid XML.

Where references are cyclic the specification is clear that the already supplied object is not re-supplied and nested inside itself - an xlink is used to refer back up the XML tree. Our current solution to the above problem is similar i.e. if the object has been supplied once then xlink to it instead of repeating it.
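A sketch of that "supply once, then xlink back" workaround: scan the result document in document order and replace any repeated inline object (duplicate gml:id) with an xlink back-reference on its containing property element. Element names in the sample are illustrative, not GeoSciML:

```python
# Sketch of the "supply once, then xlink back" behaviour described above:
# repeated inline objects (duplicate gml:id) are dropped and replaced by an
# xlink back-reference on the containing property. Sample names are made up.
import xml.etree.ElementTree as ET

GML_NS = "http://www.opengis.net/gml"
XLINK_NS = "http://www.w3.org/1999/xlink"

def dedup_inline_objects(root: ET.Element) -> None:
    """Keep the first inline copy of each gml:id; xlink back to it afterwards."""
    seen = set()
    for prop in root.iter():  # document order, so the first copy is kept
        for obj in list(prop):
            gml_id = obj.get(f"{{{GML_NS}}}id")
            if gml_id is None:
                continue
            if gml_id in seen:
                # Already supplied once: drop the inline copy and turn the
                # containing property element into an xlink back-reference.
                prop.remove(obj)
                prop.set(f"{{{XLINK_NS}}}href", f"#{gml_id}")
            else:
                seen.add(gml_id)

doc = f"""
<collection xmlns:gml="{GML_NS}">
  <member><specification><unit gml:id="gu1"/></specification></member>
  <member><specification><unit gml:id="gu1"/></specification></member>
</collection>"""
root = ET.fromstring(doc)
dedup_inline_objects(root)
inline = [e for e in root.iter() if e.get(f"{{{GML_NS}}}id")]
refs = [e for e in root.iter() if e.get(f"{{{XLINK_NS}}}href")]
print(len(inline), refs[0].get(f"{{{XLINK_NS}}}href"))  # 1 #gu1
```

After the pass the gml:id is unique in the document, which is exactly what the resolve-references behaviour in the draft spec would otherwise break.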

The resolve paths identify a chain of xlink references. GML associations provide the option of encoding the referenced object in-line or encoding an xlink reference to the object. GML references, however, provide only the option to use an xlink. Resolve paths can only be followed through GML associations and not through GML references. The reason for this is that xlinks are un-typed i.e. the xlink is a reference to any kind of web resource and the type of the referenced element is not declared in the schema. This makes it impossible to determine a valid path (beyond the first step) from the schema since there is no way of telling, by examining the schema, what elements may come in the path after the xlink is resolved. This is a problem for the client since it must have knowledge of the returned data model above and beyond anything returned by the getCapabilities or describeFeatureType operations.

Resolve References (remote):

It would be very difficult to support the resolution of remote references in a WFS with any level of performance or scalability. We would therefore be reluctant to implement this feature.

Resolving references locally can use a "back door" approach to resolution which makes use of database structure i.e. table joins. Resolving remote references is effectively trying to create a join to a table in a remote database. It becomes impossible to use any database technologies (i.e. joins on indexed columns) to make this process work efficiently.

Response times to queries would necessarily be very slow since a response would have to wait for queries to remote WFSs to return. This could be a very large number of separate remote requests since each remote resource would have to be resolved individually. Alternatively the WFS could "save up" remote requests and batch them, but this would rule out streaming of results - again hitting scalability badly since whole result sets would have to be kept in memory before returning them to the client. Because response times will be long the service instance will be occupied for much longer and so the number of concurrent requests which can be handled will be low.

It also strikes me that this is to some extent a competitor to service chaining. Would it not be better to use service chaining to coordinate requests to multiple WFSs? By allowing chaining within the WFS the WFS spec then has to deal with all the failure and exception cases which service chaining already handles.

Stored Queries:

The bare minimum is to support select feature by ID (plus the describe and list operations). This should be easy to implement.

Manage Stored Queries:

This is all new, but managing stored queries should be straightforward.


The schema-element() function seems to allow for polymorphic query. e.g. a query can be placed against "gml:AbstractFeature" and all concrete feature types would be queried as a result. This looks feasible but there may be cases that become complex which I haven’t spotted yet.
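One way to picture what schema-element() asks of the server: expand the named (possibly abstract) element to every concrete type that can substitute for it, then evaluate the query against each. A toy sketch with a hand-written substitution table; the hierarchy is illustrative, not read from a real schema:

```python
# Toy model of schema-element() expansion: a query against an abstract
# element must cover every concrete type that substitutes for it.
# The substitution table below is illustrative, not from a real schema.
SUBSTITUTES_FOR = {
    "gsml:GeologicFeature": "gml:AbstractFeature",
    "gsml:GeologicUnit": "gsml:GeologicFeature",
    "gsml:Contact": "gsml:GeologicFeature",
    "gsml:MappedFeature": "gml:AbstractFeature",
}
ABSTRACT = {"gml:AbstractFeature", "gsml:GeologicFeature"}

def queryable_types(head: str) -> set[str]:
    """Concrete types a schema-element(head) query must be evaluated against."""
    result = {head}
    for t in SUBSTITUTES_FOR:
        cur = t
        while cur is not None:  # walk up the substitution chain
            if cur == head:
                result.add(t)
                break
            cur = SUBSTITUTES_FOR.get(cur)
    return {t for t in result if t not in ABSTRACT}

print(sorted(queryable_types("gsml:GeologicFeature")))
# ['gsml:Contact', 'gsml:GeologicUnit']
```

The complexity Eddie anticipates shows up when the expansion is large or when the expanded types do not all carry the properties the filter references.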


Response paging:

The specification states that the server will keep a result set in cache in case a client asks for further pages and that this cache may be cleared after a timeout. This makes the server stateful.

Stateful services are inherently less scalable than stateless ones. In a stateless service all service instances are identical and so a client can place a request with any instance. In a stateful service the service instances have different, client specific states and so a client must send a whole series of requests to the same service instance. This requires many more service instances to be available and each instance takes up more machine resources holding the client state.

Previous versions of WFS are stateless services and therefore can be made to perform and scale.

Feature versions:

This is a hard one to assess since it depends a good deal on the underlying database and how it manages versions. In our product set we don’t dictate the underlying storage model so would need specific use cases and databases as examples before we can really design this functionality.

The additional object container will give us a bit of development work to do but it is certainly feasible. I have also had a chance to look at FES2.0 – the degree of change is much smaller than the WFS spec and there is nothing to cause concern there. The most complicated thing is the new matchAction parameter which allows the client to specify the semantics of a match operator when there are multiple values for the property. This plugs a serious hole in the previous version, where the semantics of the operator were ambiguous for multiple properties, but the ability of the client to choose from a range of semantics effectively increases the number of operators in the interface and would give us a chunk of development work to carry out."
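A small sketch of the matchAction semantics described above, applied to a multi-valued property. FES 2.0 defines the actions All, Any and One; the property values and predicate here are made up:

```python
# Sketch of FES 2.0 matchAction semantics: when a property is multi-valued,
# the client chooses how a comparison over the values combines.
# The sample values and predicate are illustrative.
def matches(values, predicate, match_action="any"):
    """Apply a comparison predicate to a multi-valued property."""
    hits = [predicate(v) for v in values]
    if match_action == "all":
        return all(hits)      # every value must satisfy the predicate
    if match_action == "any":
        return any(hits)      # at least one value must satisfy it
    if match_action == "one":
        return hits.count(True) == 1  # exactly one value satisfies it
    raise ValueError(f"unknown matchAction: {match_action}")

ages = ["Visean", "Tournaisian"]        # hypothetical multi-valued age property
is_visean = lambda v: v == "Visean"
print(matches(ages, is_visean, "any"))  # True
print(matches(ages, is_visean, "all"))  # False
print(matches(ages, is_visean, "one"))  # True
```

As the quoted comments note, each action is effectively a distinct operator the server must implement, which is where the extra development work comes from.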


Topic attachments
  • BoreholeCount.java (0.7 K, 29 Jul 2008 - 17:47, ChristianBellier) - The Java class executing the processing of the WPS service counting the number of boreholes. Services on boreholes developed by BRGM use the Java borehole parser to decode borehole data.
  • BoreholeCountProxy.java (3.7 K, 29 Jul 2008 - 15:44, ChristianBellier) - Java code to call the WPS service counting the number of boreholes from a GeoSciML boreholes URL.
  • Main.java (0.4 K, 29 Jul 2008 - 15:45, ChristianBellier) - The Java main program using the service for counting the number of boreholes.
Topic revision: r45 - 07 Apr 2011, MarcusSen

Current license: All material on this collaboration platform is licensed under a Creative Commons Attribution 3.0 Australia Licence (CC BY 3.0).