"Seegrid will be due for a migration to confluence on the 1st of August. Any update on or after the 1st of August will NOT be migrated"

This is a general discussion on the technical aspects of the MCA AusIndustry SEE Grid Information Services Roadmap project.

This is a chronological discussion area. Any conclusions reached or reference material should be removed from this page and given a dedicated topic of its own.

Jump in...

-- RobertWoodcock - 01 Jul 2004

To that end I'm adding a page on the Geoserver extensions/mapping issues.


-- StuartGirvan - 27 Sep 2004

And here's another one that explains the broad architecture that GA used to deploy Geoserver.


-- StuartGirvan - 11 Nov 2004

Some queries for clarification:

1. Who is responsible for mapping the actual table contents to the XMML examples? (My assumption is this is the data custodian?)
   Yes - the data custodian must be assumed to be the authority on the semantics of their schema. Of course the public (XMML) schema must also be properly documented, so there may be some liaison required in the early days between the data custodians and the XMML designers - that is the phase we are in now.
2. Which of the XMML patterns is actually going to be used for each case?
   See MCAGeochemistryExamples. There is also the issue of (1) client- vs server-side processing - see discussion on MCAProjectScenarios, (2) the interaction between the encoding and the WFS query - see StrongWeakTyping, and (3) the encoding and re-usable stylesheet components - see below.
3. Who is going to create report stylesheets for the sample data (definitely a domain expert)? We need HTML and SVG, and possibly also transformations into related types for download.
   The custodians and the end-users should be broadly responsible for design - this is UI stuff, so should follow a usability analysis. See MCAStyleSheets.
4. Who will define the queries on the WFS, how, and what will be done with the results?
   Need input from custodians and end-users. The storyboard will focus this - see MCAProjectScenarios.
5. Which parts of this are phase 1c vs phase 1b that we (i.e. the domain experts) really need to work out to finish the 1b data sample preparations?

Re point #2, the object serialisation patterns seem to be highly variable and potentially hairy to handle. From the demonstrator perspective, we expect XSL to be supplied to display data according to the needs of domain users, so we should be reasonably agnostic. It's worth noting, however: for a general GIS system it would be useful to be able to ensure that, for a given purpose, a mappable geometry is a property of the WFS response feature type - not of a related object referenced in the same document, inlined at some arbitrary level of nesting, or referenced via an external link. I can't see many clients coping with that in an interactive environment.
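
To make the client-side problem concrete, here is a minimal sketch of what a client would have to do to locate a "mappable geometry" across the three serialisation patterns (direct/inlined property, nested in a related object, or an xlink reference). The element names (gml:Point etc.) and the resolution logic are illustrative assumptions, not the project schema:

```python
# Sketch: resolve a feature's geometry whether it is (a) a direct property,
# (b) inlined at arbitrary depth, or (c) a document-local xlink reference.
# External links (href not starting with "#") would need a second fetch.
import xml.etree.ElementTree as ET

NS = {"gml": "http://www.opengis.net/gml",
      "xlink": "http://www.w3.org/1999/xlink"}
GML_ID = "{http://www.opengis.net/gml}id"
XLINK_HREF = "{http://www.w3.org/1999/xlink}href"

def find_geometry(feature, doc_root):
    """Return the first gml:Point for a feature, or None if unresolvable."""
    # Cases (a)/(b): an inlined gml:Point, however deeply nested
    points = feature.findall(".//gml:Point", NS)
    if points:
        return points[0]
    # Case (c): follow a local xlink:href="#id" reference
    for el in feature.iter():
        href = el.get(XLINK_HREF)
        if href and href.startswith("#"):
            for p in doc_root.findall(".//gml:Point", NS):
                if p.get(GML_ID) == href[1:]:
                    return p
    return None  # external reference - cannot be resolved from this document
```

Even this toy version shows why an interactive client would prefer the geometry to be a guaranteed direct property of the response feature type.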

I also expect Simon and I will need to burn some coffee together soon to explore general patterns for these stylesheets so that domain experts can focus on the final presentation, not the common business logic of finding the right pieces to display.

-- RobAtkinson - 01 Jul 2004

WA Perspective

Paul Morris' (GSWA) response to Simon's comments:

"My guess is that for "basic data inspection" the users usually want to see either

  • all the analyte results for a location - i.e. the GeochemSample view ..."

[MORRIS, Paul] This is in part true - i.e. to see the analytical results for a sample or samples at a location. This may include data for more than one analysis of a single sample at a location.
  • "the distribution of an analyte over the area of interest - i.e. the PointSetCoverage view"

[MORRIS, Paul] True (e.g. Au distribution over an area)

"BUT, we also suspect that when they start refining their queries, this sometimes leads them to ask
  • more details about the analytical process - i.e. the normalised Procedure/Specimen/Measurement view."

[MORRIS, Paul] I'm not sure what this means, but if 'refining' means users want to know more about the actual analytical result (e.g. sample medium, preparation, analytical method, precision, accuracy) then that's correct - it's now increasingly common to compare analytical data according to method of analysis, and VERY common to compare data using a single sample medium. For example, it's almost worthless comparing Au data by colorimetry with Au data by fire assay/AAS.

"So in the end, the most primitive, fully normalised, view is more or less always required."

[MORRIS, Paul] If this means the most comprehensive compilation of data, then that's correct. In DataShed, mandatory data include:
  • unit
  • lower detection level
  • analytical method

-- StephenBandy (GSWA) 20 Jul 2004

OK - so if these "mandatory" items were provided in the de-normalised version, then would most users be satisfied by the latter?

What I'm really trying to figure out is whether we should aim to
  • minimise the number of to-and-fro's between client and server - this would be the case if the data were served in the "primitive" highly normalised form, and then formatted for inspection in tabular or graphical form on the client side, or
  • minimise the transformations required on the client side by sending a summary view from the server first, and then expect the client to iterate with requests for more detail - this requires that enough information is attached to the summary fields to allow retrieval of the "metadata" for the selected values.

This decision is expected to have a huge impact on the relative demands made on the client and server.
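
For concreteness, the two strategies might be sketched as WFS request patterns. The endpoint, feature type names, and property names below are invented for illustration; only the GetFeature/PROPERTYNAME mechanics are standard WFS:

```python
# Hypothetical sketch of the two interaction styles. BASE, the type names
# and the property names are placeholders, not agreed project names.
from urllib.parse import urlencode

BASE = "http://example.org/wfs"  # hypothetical endpoint

def get_feature(typename, propertynames=None, **extra):
    """Build a WFS 1.0 GetFeature URL, optionally restricting the
    returned properties to a summary subset via PROPERTYNAME."""
    params = {"SERVICE": "WFS", "VERSION": "1.0.0",
              "REQUEST": "GetFeature", "TYPENAME": typename}
    if propertynames:
        params["PROPERTYNAME"] = ",".join(propertynames)
    params.update(extra)
    return BASE + "?" + urlencode(params)

# Strategy 1: one big request - fully normalised payload, formatted client-side
full = get_feature("xmml:GeochemSample")

# Strategy 2: summary view first, then iterate with requests for detail
summary = get_feature("xmml:GeochemSample",
                      propertynames=["analyte", "value", "unit"])
detail = get_feature("xmml:Measurement", FEATUREID="meas.12345")
```

Strategy 1 trades one heavy response for client-side work; strategy 2 keeps each response light but requires the summary fields to carry enough identity information to drive the follow-up requests.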

-- SimonCox - 21 Jul 2004

SA Perspective

In-house discussions have clearly shown that maximum query capabilities are desirable in a production version. However, we suggest that the demonstrator case be kept as simple as possible in order to maximise the chances of successfully resolving the issues which are bound to arise during implementation.

For the demonstrator, we would consider combinations of the following query types sufficient for test/demo purposes:
  • Spatial query by lat/long rectangle
  • Attributes by sample identifier
  • Attributes by analyte + value
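
The three query types above map naturally onto OGC Filter encodings. A rough sketch follows; the property names (gml:location, xmml:sampleID) are placeholders, not the agreed schema:

```python
# Rough OGC Filter 1.0 encodings for the three demonstrator query types.
# All property names here are hypothetical placeholders.

def bbox_filter(minx, miny, maxx, maxy):
    """Spatial query by lat/long rectangle."""
    return ("<Filter><BBOX><PropertyName>gml:location</PropertyName>"
            f"<gml:Box><gml:coordinates>{minx},{miny} {maxx},{maxy}"
            "</gml:coordinates></gml:Box></BBOX></Filter>")

def id_filter(sample_id):
    """Attributes by sample identifier."""
    return ("<Filter><PropertyIsEqualTo>"
            "<PropertyName>xmml:sampleID</PropertyName>"
            f"<Literal>{sample_id}</Literal></PropertyIsEqualTo></Filter>")

def analyte_filter(analyte_property, threshold):
    """Attributes by analyte + value (exceeding a threshold)."""
    return ("<Filter><PropertyIsGreaterThan>"
            f"<PropertyName>{analyte_property}</PropertyName>"
            f"<Literal>{threshold}</Literal></PropertyIsGreaterThan></Filter>")
```

Combinations would be expressed by wrapping these in an <And> element; keeping to these three primitives should keep the demonstrator implementation simple.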

Extension beyond these may lead to significant delays at our end.

Refer to GeochemExamplesSA for comments on data fields.

-- GregoryJenkins - 29 Jul 2004

Let me make sure that I understand these queries correctly: the requirement is for reports to contain descriptions of samples selected by

  • location
  • the identifier assigned to the sample within the Dept catalogue (see note)
  • the values of certain analytes exceeding some threshold

(note: in GML/XMML external identifiers are recorded as the sample "name"; the value of the gml:id may or may not have a resemblance to this, depending on the policy of the WFS)
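
To illustrate the note, a hypothetical GML fragment in which the Dept catalogue identifier travels as the gml:name while the gml:id is a server-internal value (both identifiers invented for illustration):

```xml
<!-- Hypothetical fragment: the catalogue identifier is the gml:name;
     the gml:id is whatever identifier the WFS assigns internally. -->
<xmml:GeochemSample gml:id="f1021">
  <gml:name>SAMP-0042</gml:name>
  <!-- ... sample properties ... -->
</xmml:GeochemSample>
```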

-- SimonCox - 30 Jul 2004

Demonstrator portal capabilities

These may be extended as needed; however, there is sufficient built-in flexibility in the installed build to deal with most of the basic issues we now face.

Summary of relevant capabilities (with links to resources and planned work):

  • Location data (single point geometry) can be mapped as interactive points (using a URL-sourced image as a glyph) - as per the Geochem 1 & 2 demos installed. Clicking on a point runs a generic stylesheet over the result set to extract the feature(s) with matching "fid" fields. (Note: this stylesheet currently supports GML 2 and a pre-GML pattern ('csgidata') used by CANRI - http://canri.nsw.gov.au)
    Resource: http://cgsrv3.arrc.csiro.au/wmc/system/widgets/info/xsl/feature.xsl
    Planned: update the generic stylesheet to support GML 3 (id), possibly also other identifiable patterns
  • If the selected feature type is known, a specific stylesheet may be registered against that feature type to customise the report.
    Planned: a web interface to this facility is being created
  • A map layer may have an SVG stylesheet specified; this is used to render the layer. The stylesheet must conform to a simple parameter set to support geolocation.
    Resources: simple generic - http://cgsrv3.arrc.csiro.au/wmc/user/stylesheets/egsvg.xsl; Ordnance Survey example - http://cgsrv3.arrc.csiro.au/wmc/user/stylesheets/os_gml2svg.xsl
    Planned: write up UI interactivity API for SVG stylesheets

-- RobAtkinson - 10 Aug 2004 -- SimonCox - 12 Aug 2004

Binding Strategies

As was known at project commencement, there is an issue with the relationship between valid queries to a Web Feature Server and the objects it returns. WFS assumes that any feature property in the output schema can be used to construct filters. In practice we find that this is not good enough:
  • you may wish to query a related feature - find me boreholes where labX was used to measure Au
  • some queries may be very expensive or even impossible to support (e.g. if features are derived from a model)

Web Map Composer is a fairly sophisticated WFS client - it uses the results of DescribeFeatureType operations to provide a query building interface. We need to extend this paradigm and this discussion proposes a way forward:

Current supported options
  1. Interrogate schema via DescribeFeatureType and allow the user to enter single values for scalar properties found
  2. Create a filter as a set of scalar property constraints within a "map context document". In this case, the properties can be different from those advertised by the WFS - this is not checked at run time.
  3. Override filters with a literal document constructed by a customised widget - we use this, for example, for searching via arbitrary web service interfaces; the target doesn't even need to be a WFS

Do we need a more powerful way to configure filters at run-time? It seems so; the options are:

  1. Create special UI widgets able to manipulate the service filter directly (write filter)
  2. Create a global (registered) mapping of input schema documents to output schema types
  3. Allow such a mapping to be done per map layer
  4. Support a more sophisticated parameterised binding template that supports a "wizard"-like dialogue (content as well as structure) - the state management tools for this exist but the interface is not developed.
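
As a baseline for option 1 in the currently supported list, here is a rough sketch of interrogating a DescribeFeatureType response (an XML Schema document) to find the scalar properties a query-building UI could offer as filter fields. The sample schema in the test and the notion of "scalar" used here are simplifying assumptions; real server responses vary:

```python
# Sketch: list the scalar properties advertised by a DescribeFeatureType
# response, as a query-building UI might. "Scalar" is approximated here as
# an xsd:string/double/int/decimal element type - a simplification.
import xml.etree.ElementTree as ET

XSD = "{http://www.w3.org/2001/XMLSchema}"
SCALAR_TYPES = {"string", "double", "int", "decimal"}

def scalar_properties(schema_doc):
    """Return the names of elements whose declared type looks scalar."""
    root = ET.fromstring(schema_doc)
    props = []
    for el in root.iter(XSD + "element"):
        declared = el.get("type", "")
        # strip any namespace prefix ("xsd:double" -> "double")
        if declared.split(":")[-1] in SCALAR_TYPES:
            props.append(el.get("name"))
    return props
```

A per-layer or registered input/output mapping (options 2 and 3 above) would then sit on top of a list like this, restricting or renaming what the user actually sees.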

-- RobAtkinson - 14 Oct 2004

Address for PIRSA's WFS:


-- StuartGirvan - 29 Nov 2004
Topic revision: r18 - 15 Oct 2010, UnknownUser

Current license: All material on this collaboration platform is licensed under a Creative Commons Attribution 3.0 Australia Licence (CC BY 3.0).