In this session, Mike Flasko demonstrated the new ADO.NET data services framework that enables developers to create services that expose data over a REST interface using industry standard formats and semantics such as JSON and AtomPub.
The reason I attended this session was to find out what this was all about, because on the face of it the idea of a data services framework worries me for a variety of reasons, including:
- Why would I ever expose my entire data model over a REST interface?
- Surely doing so breaks separation of concerns? My clients should be invoking a service or REST interface that returns targeted and focused results based on a strict request/response model.
- Isn't the data services framework tied to LINQ to Entities?
- Opening up an interface direct to my data sounds extremely dangerous and opens it to all manner of abuses!
- What if I have business logic executed when I manipulate data within my service - using data services this can't be expressed!
Taking each in turn:
Why would I ever expose my entire data model over REST?
Quite simply, data is what drives much of web 2.0 - it drives mash-ups and makes data-driven AJAX, Flash and Silverlight applications possible.
Normally we might expose this data through a bunch of services (possibly REST) with tightly defined semantics such as GetCustomers, GetCustomer(1) and so on. The data services team suggest, however, that as application complexity and size increase, managing these interfaces becomes cumbersome and tedious, and may be better served by interfacing directly with the data via a specific data service.
These data services are exposed over REST, allowing access to your data model by navigable URL. For example, let's say you have a data model of People with contact telephone numbers; you could access this data with the following HTTP conventions:
To get a list of people you would execute a standard GET verb request against the resource URL which might be something like: http://yourdomain.com/yourservice.svc/data/people
To filter people, you would append query options to the URL, perhaps like this: http://yourdomain.com/yourservice.svc/data/people?$filter=Name eq 'Smith'
Whilst to get a specific person with an ID of 12, you might look at http://yourdomain.com/yourservice.svc/data/people(12)
From here you can get a list of that person's contact details: http://yourdomain.com/yourservice.svc/data/people(12)/contacts
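The URL conventions above can be sketched as a small helper. The service root and entity names here are the same hypothetical examples used above:

```python
# Builds addressable URLs following the data service conventions described
# above. The service root and entity names are hypothetical examples.
SERVICE_ROOT = "http://yourdomain.com/yourservice.svc/data"

def resource_url(entity_set, key=None, navigation=None, filter_expr=None):
    """Compose a navigable data service URL from its parts."""
    url = f"{SERVICE_ROOT}/{entity_set}"
    if key is not None:
        url += f"({key})"                 # address a single entity by key
    if navigation:
        url += f"/{navigation}"           # follow a relationship, e.g. contacts
    if filter_expr:
        url += f"?$filter={filter_expr}"  # server-side filtering
    return url

print(resource_url("people"))
print(resource_url("people", key=12, navigation="contacts"))
print(resource_url("people", filter_expr="Name eq 'Smith'"))
```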
To make changes to the data, you simply change the HTTP verb: to insert a record you POST it, to update you PUT, and to remove you DELETE (the standard REST verb mapping). The results are returned in the format requested in the headers (i.e. you can set the Accept header to ask for Atom or JSON), and the outcome of any given action is reported with a standard HTTP status code - e.g. requesting people(12) when no person exists with ID 12 would generate a 404 Not Found.
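A rough sketch of that verb mapping, using Python's standard library to build (but not send) the requests; the URLs are the hypothetical examples from above:

```python
from urllib.request import Request

BASE = "http://yourdomain.com/yourservice.svc/data"

# Map CRUD intent onto the standard HTTP verbs used by the REST interface:
# POST inserts a new entity, PUT updates an existing one, DELETE removes it.
def build_request(operation, url, body=None, accept="application/json"):
    verbs = {"read": "GET", "insert": "POST", "update": "PUT", "delete": "DELETE"}
    req = Request(url, data=body, method=verbs[operation])
    req.add_header("Accept", accept)  # ask for JSON (or application/atom+xml)
    return req

req = build_request("insert", f"{BASE}/people", body=b'{"Name": "Smith"}')
print(req.get_method())          # POST
print(req.get_header("Accept"))  # application/json
```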
When data is returned it also carries relative, navigable references to other parts of the data model. Again, this is a common REST feature: the response describes not only the data being returned, but also how to navigate it by invoking the service with more specific information.
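For illustration, here is what following such a reference might look like on the client. The JSON shape below is an assumption for the sketch, not the framework's precise wire format:

```python
# Hypothetical response payload: the entity's own data plus a deferred link
# to a related resource that the client can follow with another GET.
person = {
    "ID": 12,
    "Name": "Smith",
    "Contacts": {
        "__deferred": {
            "uri": "http://yourdomain.com/yourservice.svc/data/people(12)/contacts"
        }
    },
}

def related_uri(entity, navigation):
    """Pull the follow-up URL for a relationship out of the payload."""
    return entity[navigation]["__deferred"]["uri"]

print(related_uri(person, "Contacts"))
```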
On the client side, you can query your data service using a LINQ query, offering a powerful and intelligent way to request exactly the data you want from the web service - but this is also one of my concerns.
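Under the covers, such a client query ultimately boils down to the URL query options shown earlier. A rough, illustrative sketch of that translation (the real client library generates this from the LINQ expression tree; the option names here are assumed for the sketch):

```python
# Illustrative only: mimics how a client library might translate a simple
# query into the data service's $filter/$orderby/$top URL options.
def translate_query(entity_set, where=None, order_by=None, top=None):
    options = []
    if where:
        options.append(f"$filter={where}")     # restriction clause
    if order_by:
        options.append(f"$orderby={order_by}") # sort clause
    if top is not None:
        options.append(f"$top={top}")          # result limit
    query = "?" + "&".join(options) if options else ""
    return f"http://yourdomain.com/yourservice.svc/data/{entity_set}{query}"

print(translate_query("people", where="Name eq 'Smith'", top=10))
```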
Isn't this breaking the principle of separation of concerns?
I believe it is, because your calling code is tightly coupled to the semantics of your entity model, rather than being coupled only to the DTO it expects to be returned. This might be a problem in some situations, but in others it may be perfectly fine, so there isn't a hard and fast rule here.
Isn't the data services framework tied to LINQ to Entities?
My next concern was the reliance of the data services framework on LINQ to Entities, of which I'm not a big fan. I'm led to believe, however, that this isn't true. Apparently your data service is able to expose any object model in any way you see fit, but in a limited 75-minute session this wasn't covered in any detail. I'm assuming, though, that this will at least require that the context of your data (e.g. an NHibernate session) implement the IQueryable interface to allow LINQ queries to run against it. Again, I'll reserve judgement on this until I've seen an implementation of a data service that isn't exposing a LINQ to Entities model.
Opening up an interface direct to my data sounds extremely dangerous and opens it to all manner of abuses!
In terms of security, however, my concerns were unfounded. One of the things I really liked about data services was the ability to lock down your data with any rule you can code. You use standard authentication and authorization mechanisms (this is just HTTP, after all) to govern, with fine granularity, access permissions at the entity level, row level and field level - this will please SPROC aficionados, who often argue that they avoid OR/M technologies because they lose row-level security on their data.
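As an illustration of the kind of rule you might code - this is a sketch of the concept, not the framework's actual interceptor API - a row- and field-level check might look like:

```python
# Illustrative sketch (not the framework's API): a per-request rule that
# restricts which rows, and which fields of those rows, a caller may see.
def authorize(rows, user):
    # Row-level: non-admins only see rows they own.
    visible = [r for r in rows if r["Owner"] == user or user == "admin"]
    for r in visible:
        if user != "admin":
            r = dict(r)
            r.pop("Salary", None)  # field-level: hide a sensitive column
        yield r

rows = [{"Owner": "alice", "Name": "Smith", "Salary": 100},
        {"Owner": "bob", "Name": "Jones", "Salary": 90}]
print(list(authorize(rows, "alice")))
```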
Finally, the business rules and logic issue is still perfectly valid. If you need to carry out rules processing during invocation of operations on your model, then a conventional approach would be more suitable, as you don't get this opportunity under data services. When a request arrives to manipulate data, it is surfaced straight into the data services framework and executed against your data.
Overall I thought the technology was interesting and could see uses for it in numerous areas. In enterprise applications with distinct layers and business rules, however, I'm not sure it's that applicable, and I'd also like to see it surface a non-L2E model.