diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model-1.png b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model-1.png deleted file mode 100644 index 1bb373447d7..00000000000 Binary files a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model-1.png and /dev/null differ diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model.html deleted file mode 100644 index 9ef49c39f64..00000000000 --- a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_concurrency_model.html +++ /dev/null @@ -1,432 +0,0 @@ - - - - - DSF Concurrency Model - - -

DSF Concurrency Model

-

-

-

Version 1.0
Pawel Piech
© 2006, Wind River Systems.  Released under EPL version 1.0.

-

Introduction

Providing a solution to concurrency problems is the primary design goal of DSF.  To that end, DSF imposes a rather draconian restriction on services that use it: (1) all service interface methods must be called using a single designated dispatch thread, unless explicitly stated otherwise, and (2) the dispatch thread must never be used to make a blocking call (a call that waits on I/O or performs a long-running computation).  What the first restriction effectively means is that the dispatch thread becomes a global "lock" that all DSF services in a given session share with each other, and which controls access to most of the services' shared data.  It's important to note that multi-threading is still allowed within individual service implementations, but when crossing service interface boundaries, only the dispatch thread can be used.  The second restriction simply ensures that the performance of the whole system is not killed by one service that needs to read a huge file over the network.  Another way of looking at it is that the service implementations practice co-operative multi-threading using the single dispatch thread.
-
-There are a couple of obvious side effects that result from this rule:
-
  1. When executing within the dispatch thread, the state of the services is guaranteed not to change.  This means that thread-defensive programming techniques, such as duplicating lists before iterating over them, are not necessary.  It is also possible to implement much more complicated logic that polls the state of many objects, without worrying about deadlocks.
  2. Whenever a blocking operation needs to be performed, it must be done using an asynchronous method.  By the time the operation completes and the caller regains the dispatch thread, the caller may need to retest the relevant state of the system, because it could have changed completely while the asynchronous operation was executing.
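The single-dispatch-thread rule above can be sketched with plain JDK executors.  This is an illustrative sketch, not DSF code (the class and field names are invented): all access to the shared list is funneled through a single-threaded executor standing in for the DSF dispatch thread, so no locks or defensive copies are needed, and runnables execute one at a time in submission order.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DispatchThreadSketch {
    // Stand-in for the DSF dispatch thread.
    private static final ExecutorService DISPATCH = Executors.newSingleThreadExecutor();
    // Shared state, touched only from the dispatch thread.
    private static final List<String> fSharedState = new ArrayList<>();

    static String runDemo() throws Exception {
        DISPATCH.execute(() -> fSharedState.add("a"));
        DISPATCH.execute(() -> fSharedState.add("b"));
        // Safe to read without copying: runnables execute one at a time,
        // in submission order, so nothing mutates the list concurrently.
        return DISPATCH.submit(() -> String.join(",", fSharedState)).get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo()); // prints "a,b"
        DISPATCH.shutdown();
    }
}
```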

The Mechanics

-

java.util.concurrent.ExecutorService
-

-DSF builds on the vast array of tools added in Java 5.0's -java.util.concurrent package (see http://java.sun.com/j2se/1.5.0/docs/guide/concurrency/index.html -for details), where the most important is the ExecutorService -interface.  ExecutorService -is a formal interface for submitting Runnable objects that will be -executed according to executor's rules, which could be to execute the -Runnable immediately, -within a thread pool, using a display thread, -etc.  For DSF, the main rule for executors is that they have -to use a single thread to execute the runnable and that the runnables -be executed in the order that they were submitted.  To give the -DSF clients and services a method for checking whether they are -being called on the dispatch thread, we extended the ExecutorService -interface as such:
-
public interface DsfExecutor extends ScheduledExecutorService
{
    /**
     * Checks if the thread that this method is called in is the same as the
     * executor's dispatch thread.
     * @return true if in DSF executor's dispatch thread
     */
    public boolean isInExecutorThread();
}
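One way such a check could be implemented, sketched with JDK types only (the class and field names here are illustrative, not DSF's actual implementation): a one-thread scheduled executor whose ThreadFactory records the single thread it creates, so the check is a simple identity comparison.

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;

public class SingleThreadDispatch extends ScheduledThreadPoolExecutor {
    private volatile Thread fDispatchThread;

    public SingleThreadDispatch() {
        super(1); // exactly one worker thread
        // Record the dispatch thread as it is created.
        setThreadFactory(r -> fDispatchThread = new Thread(r, "DSF dispatch"));
    }

    public boolean isInExecutorThread() {
        return Thread.currentThread() == fDispatchThread;
    }

    public static void main(String[] args) throws Exception {
        SingleThreadDispatch executor = new SingleThreadDispatch();
        System.out.println(executor.isInExecutorThread());                       // false: main thread
        System.out.println(executor.submit(executor::isInExecutorThread).get()); // true: dispatch thread
        executor.shutdown();
    }
}
```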
-

java.util.concurrent.Future vs org.eclipse.dd.dsf.concurrent.Done

The Done object encapsulates the return value of an asynchronous call in DSF.  It is merely a Runnable with an attached org.eclipse.core.runtime.IStatus object, but it can be extended by services or clients to hold whatever additional data is needed.  A typical pattern for using the Done object is as follows:
-
Service:
public class Service {
    void asyncMethod(final Done done) {
        new Job() {
            public void run() {
                // perform calculation
                ...
                done.setStatus(new Status(IStatus.ERROR, ...));
                fExecutor.execute(done);
            }
        }.schedule();
    }
}

Client:
...
Service service = new Service();
final String clientData = "xyz";
...
service.asyncMethod(new Done() {
    public void run() {
        if (getStatus().isOK()) {
            // Handle return data
            ...
        } else {
            // Handle error
            ...
        }
    }
});
The service performs the asynchronous operation on a background thread, but it can still submit the Done runnable with the executor.  In other words, the Done and other runnables can be submitted from any thread, but will always execute in the single dispatch thread.  Also, if the implementation of asyncMethod() is non-blocking, it does not need to start a job; it could just perform the operation in the dispatch thread.  On the client side, care has to be taken to save the appropriate state before the asynchronous method is called, because by the time the Done is executed, the client state may have changed.
-
The java.util.concurrent package doesn't have a Done, because the generic concurrent package is geared more towards large thread pools, where clients submit tasks to be run in a style similar to Eclipse's Jobs, rather than towards the single dispatch thread model of DSF.  The concurrent package does, however, have an equivalent object, Future.  Future allows the client to call its get() method and block while waiting for a result, and for this reason it cannot be used from the dispatch thread.  But it can be used, in a limited way, by clients running on a background thread that still need to retrieve data from synchronous DSF methods.  In this case the code might look like the following:
-
Service:
public class Service {
    int syncMethod() {
        // perform calculation
        ...
        return result;
    }
}

Client:
...
DsfExecutor executor = new DsfExecutor();
final Service service = new Service(executor);
Future<Integer> future = executor.submit(new Callable<Integer>() {
    public Integer call() {
        return service.syncMethod();
    }
});
int result = future.get();
The biggest drawback to using Future with DSF services is that it does not work with asynchronous methods.  This is because the Callable.call() implementation has to return a value within a single dispatch cycle.  To get around this, DSF has an additional object called DsfQuery, which works like a Future combined with a Callable, but allows the implementation to make multiple dispatches before setting the return value for the client.  The DsfQuery object works as follows:
-
-
  1. The client creates the query object with its own implementation of DsfQuery.execute().
  2. The client calls the DsfQuery.get() method on a non-dispatch thread, and blocks.
  3. The query is queued with the executor, and eventually the DsfQuery.execute() method is called on the dispatch thread.
  4. DsfQuery.execute() calls the synchronous and asynchronous methods needed to do its job.
  5. The query code calls the DsfQuery.done() method with the result.
  6. The DsfQuery.get() method unblocks and returns the result to the client.
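The steps above can be approximated with JDK types alone.  This is a hypothetical analogue of DsfQuery (not the DSF class itself): execute() runs on the dispatch thread and may schedule further dispatches before completing, while get() blocks a non-dispatch thread until the equivalent of done() is called.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class QuerySketch {
    static final ScheduledExecutorService DISPATCH =
            Executors.newSingleThreadScheduledExecutor();

    static CompletableFuture<Integer> query() {
        CompletableFuture<Integer> result = new CompletableFuture<>();
        DISPATCH.execute(() -> {
            int partial = 21;                     // first dispatch: synchronous work
            DISPATCH.schedule(() ->               // later dispatch: asynchronous part
                    result.complete(partial * 2), // "done()": unblocks get()
                    10, TimeUnit.MILLISECONDS);
        });
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(query().get());        // blocks, then prints 42
        DISPATCH.shutdown();
    }
}
```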

Slow Data Provider Example

The point of DSF concurrency can be most easily explained through a practical example.  Suppose there is a viewer which needs to show data that originates from a remote "provider".  There is a considerable delay in transmitting the data to and from the provider, and some delay in processing the data.  The viewer is a lazy-loading table, which means that it requests information only about items that are visible on the screen, and as the table is scrolled, new requests for data are generated.  The diagram below illustrates the logical relationship between the components:
-
[diagram: dsf_concurrency_model-1.png]
-

In detail, these components look like this:

-

-Table Viewer
-

The table viewer is the standard org.eclipse.jface.viewers.TableViewer, created with the SWT.VIRTUAL flag.  It has an associated content provider (SlowDataProviderContentProvider) which handles all the interactions with the data provider.  The lazy content provider operates in a very simple cycle:

-
  1. The table viewer tells the content provider that the input has changed by calling IContentProvider.inputChanged().  This means that the content provider has to query the initial state of the data.
  2. Next, the content provider tells the viewer how many elements there are, by calling TableViewer.setItemCount().
  3. At this point the table resizes, and it requests data values for the items that are visible.  For each visible item it calls ILazyContentProvider.updateElement().
  4. After calculating a value, the content provider tells the table what the value is, by calling TableViewer.replace().
  5. If the data ever changes, the content provider tells the table to re-request the data, by calling TableViewer.clear().
The table viewer operates in the SWT display thread, which means that the content provider must switch from the display thread to the DSF dispatch thread whenever it is called by the table viewer, as in the example below:
-
    public void updateElement(final int index) {
        assert fTableViewer != null;
        if (fDataProvider == null) return;

        fDataProvider.getExecutor().execute(
            new Runnable() { public void run() {
                // Must check again, in case disposed while redispatching.
                if (fDataProvider == null) return;

                queryItemData(index);
            }});
    }
Likewise, when the content provider calls the table viewer, it has to switch back into the display thread, as in the following example, where the content provider receives an event from the data provider indicating that an item value has changed.
-
    public void dataChanged(final Set<Integer> indexes) {
        // Check for dispose.
        if (fDataProvider == null) return;

        // Clear changed items in table viewer.
        if (fTableViewer != null) {
            final TableViewer tableViewer = fTableViewer;
            tableViewer.getTable().getDisplay().asyncExec(
                new Runnable() { public void run() {
                    // Check again if table wasn't disposed when
                    // switching to the display thread.
                    if (tableViewer.getTable().isDisposed()) return; // disposed
                    for (Integer index : indexes) {
                        tableViewer.clear(index);
                    }
                }});
        }
    }
-All of this switching back and forth between threads makes the code -look a lot more complicated than it really is, and it takes some -getting used to, but this is the price to be paid for multi-threading. -Whether the participants use semaphores or the dispatch thread, the -logic is equally complicated, and we believe that using a single -dispatch thread, makes the synchronization very explicit and thus less -error-prone.
-

Data Provider Service

-

The data provider service interface, DataProvider, is very similar -to that of the lazy content provider.  It has methods to:

But this is a DSF interface, and all methods must be called on the service's dispatch thread.  For this reason, the DataProvider interface returns an instance of DsfExecutor, which must be used with the interface.
-

Slow Data Provider

-

The data provider is actually implemented as a thread which is an inner class of the SlowDataProvider service.  The provider thread communicates with the service by reading Request objects from a shared queue, and by posting Runnable objects directly to the DsfExecutor, with a simulated transmission delay.  An additional flag is used to control the shutdown of the provider thread.

-To simulate a real back end, the data provider randomly invalidates a -set of items and notifies the listeners to update themselves.  It -also periodically invalidates the whole table and forces the clients to -requery all items.
-

Data and Control Flow
-

The data and control flow can be described in the following steps:
-
  1. The table viewer requests data for an item at a given index (SlowDataProviderContentProvider.updateElement).
  2. The table viewer's content provider executes a Runnable in the DSF dispatch thread and calls the data provider interface (SlowDataProviderContentProvider.queryItemData).
  3. The data provider service creates a Request object and files it in a queue (SlowDataProvider.getItem).
  4. The data provider thread de-queues the Request object and acts on it, calculating the value (ProviderThread.processItemRequest).
  5. The data provider thread schedules the calculation result to be posted with the DSF executor (SlowDataProvider.java:185).
  6. The Done callback sets the result data in the table viewer (SlowDataProviderContentProvider.java:167).
-

Running the example and full sources

-This example is implemented in the org.eclipse.dd.dsf.examples -plugin, in the org.eclipse.dd.dsf.examples.concurrent -package. 
-
-To run the example:
-
  1. Build the test plugin (along with the org.eclipse.dsdp.DSF plugin) and launch the PDE.
  2. Make sure to add the DSF Tests action set to your current perspective.
  3. From the main menu, select DSF Tests -> Slow Data Provider.
  4. A dialog will open, and after a delay it will populate with data.
  5. Scroll and resize the dialog and observe the update behavior.
-

Initial Notes
-

This example is supposed to be representative of a typical embedded debugger design problem.  Embedded debuggers are often slow in retrieving and processing data, and are sometimes accessed through a relatively slow data channel, such as a serial port or JTAG connection.  As such, this basic example presents a couple of major usability problems:
-
  1. The data provider service interface mirrors the table's content provider interface, in that it has a method to retrieve a single piece of data at a time.  The result is visible to the user as lines of data fill in one-by-one in the table.  However, most debugger back ends are in fact capable of retrieving data in batches, and are much more efficient at it than at retrieving data items one-by-one.
  2. When scrolling quickly through the table, requests are generated by the table viewer for items which are quickly scrolled out of view, but the service still queues them up and calculates them in the order they were received.  As a result, it takes a very long time for the table to be populated with data at the location where the user is looking.
-These two problems are very common in creating UI for embedded -debugging, and there are common patterns which can be used to solve -these problems in DSF services.
-

Coalescing

-Coalescing many single-item requests into fewer multi-item requests is -the surest way to improve performance in communication with a remote -debugger, although it's not necessarily the simplest.  There are -two basic patterns in which coalescing is achieved:
-
  1. The back end provides an interface for retrieving data in large chunks.  When the service implementation receives a request for a single item, it retrieves a whole chunk of data, returns the single item, and stores the rest of the data in a local cache.
  2. The back end provides an interface for retrieving data in variable-size chunks.  When the service implementation receives a request for a single item, it buffers the request and waits for other requests to come in.  After a delay, the service clears the buffer and submits a request for the combined items to the data provider.
In practice a combination of the two patterns is needed, but for the purposes of this example we implemented the second pattern in the "Input-Coalescing Slow Data Provider" (InputCoalescingSlowDataProvider.java).
-

Input Buffer

-

The main feature of this pattern is a buffer for holding the -requests before sending them to the data provider.  In this -example the user requests are buffered in two arrays: fGetItemIndexesBuffer and fGetItemDonesBuffer.  The -DataProvider.getItem() -implementation is changed as follows:

-
    public void getItem(final int index, final GetDataDone<String> done) {
        // Schedule a buffer-servicing call, if one is needed.
        if (fGetItemIndexesBuffer.isEmpty()) {
            fExecutor.schedule(
                new Runnable() { public void run() {
                    fileBufferedRequests();
                }},
                COALESCING_DELAY_TIME,
                TimeUnit.MILLISECONDS);
        }

        // Add the call data to the buffer.
        // Note: it doesn't matter that the items were added to the buffer
        // after the buffer-servicing request was scheduled.  This is because
        // the buffers are guaranteed not to be modified until this dispatch
        // cycle is over.
        fGetItemIndexesBuffer.add(index);
        fGetItemDonesBuffer.add(done);
    }

The method that services the buffer looks like this:
-
    public void fileBufferedRequests() {
        // Remove a number of getItem() calls from the buffer, and combine them
        // into a request.
        int numToCoalesce = Math.min(fGetItemIndexesBuffer.size(), COALESCING_COUNT_LIMIT);
        final ItemRequest request = new ItemRequest(new Integer[numToCoalesce], new GetDataDone[numToCoalesce]);
        for (int i = 0; i < numToCoalesce; i++) {
            request.fIndexes[i] = fGetItemIndexesBuffer.remove(0);
            request.fDones[i] = fGetItemDonesBuffer.remove(0);
        }

        // Queue the coalesced request, with the appropriate transmission delay.
        fQueue.add(request);

        // If there are still calls left in the buffer, execute another
        // buffer-servicing call, but without any delay.
        if (!fGetItemIndexesBuffer.isEmpty()) {
            fExecutor.execute(new Runnable() { public void run() {
                fileBufferedRequests();
            }});
        }
    }
The most interesting feature of this implementation is that there are no semaphores anywhere to control access to the input buffers.  Even though the buffers are serviced with a delay, and multiple clients can call the getItem() method, the use of a single dispatch thread prevents any race conditions that could corrupt the buffer data.  In real-world implementations, the buffers and caches that need to be used are far more sophisticated, with much more complicated logic, and that is where managing access to them using the dispatch thread is all the more important.
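The lock-free buffering idea can be condensed into a runnable JDK-only sketch (the names fBuffer and fBatches are illustrative, not the DSF API): getItem() buffers indexes with no locking, because the buffer is only ever touched on the dispatch thread, and a single delayed task drains the whole buffer as one coalesced batch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CoalescingSketch {
    static final ScheduledExecutorService DISPATCH =
            Executors.newSingleThreadScheduledExecutor();
    static final List<Integer> fBuffer = new ArrayList<>();        // dispatch-thread only
    static final List<List<Integer>> fBatches = new ArrayList<>(); // dispatch-thread only

    static void getItem(final int index) {
        DISPATCH.execute(() -> {
            // Schedule a buffer-servicing call only when the buffer is empty;
            // adding the index afterwards is safe within this dispatch cycle.
            if (fBuffer.isEmpty()) {
                DISPATCH.schedule(() -> {
                    fBatches.add(new ArrayList<>(fBuffer)); // one combined request
                    fBuffer.clear();
                }, 50, TimeUnit.MILLISECONDS);
            }
            fBuffer.add(index);
        });
    }

    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 5; i++) getItem(i);
        Thread.sleep(200); // let the coalescing delay expire
        System.out.println(DISPATCH.submit(() -> fBatches.size()).get()); // prints 1
        DISPATCH.shutdown();
    }
}
```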
-

Cancellability

-

Table Viewer

-

Unlike coalescing, which can be implemented entirely within the service, cancellability requires that the client be modified as well to take advantage of this capability.  For the table viewer content provider, this means that additional features have to be added.  In CancellingSlowDataProviderContentProvider.java, ILazyContentProvider.updateElement() was changed as follows:
-
    public void updateElement(final int index) {
        assert fTableViewer != null;
        if (fDataProvider == null) return;

        // Calculate the visible index range.
        final int topIdx = fTableViewer.getTable().getTopIndex();
        final int botIdx = topIdx + getVisibleItemCount(topIdx);

        fCancelCallsPending.incrementAndGet();
        fDataProvider.getExecutor().execute(
            new Runnable() { public void run() {
                // Must check again, in case disposed while redispatching.
                if (fDataProvider == null || fTableViewer.getTable().isDisposed()) return;
                if (index >= topIdx && index <= botIdx) {
                    queryItemData(index);
                }
                cancelStaleRequests(topIdx, botIdx);
            }});
    }
Now the client keeps track of the requests it has made to the service in fItemDataDones, and in cancelStaleRequests() it iterates through all the outstanding requests and cancels the ones that are no longer in the visible range.
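What such a cancelStaleRequests() might look like can be sketched with JDK Futures (this is a hypothetical illustration, not the example's actual code): the client remembers a Future per outstanding index in a map, and cancels every request whose index fell outside the visible range.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

public class CancelSketch {
    // One outstanding request per item index.
    static final Map<Integer, Future<?>> fItemDataDones = new HashMap<>();

    static void cancelStaleRequests(int topIdx, int botIdx) {
        Iterator<Map.Entry<Integer, Future<?>>> it = fItemDataDones.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<Integer, Future<?>> entry = it.next();
            if (entry.getKey() < topIdx || entry.getKey() > botIdx) {
                entry.getValue().cancel(false); // scrolled out of view
                it.remove();
            }
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            fItemDataDones.put(i, new FutureTask<>(() -> null)); // placeholder requests
        }
        cancelStaleRequests(3, 6);                 // only indexes 3..6 remain visible
        System.out.println(fItemDataDones.size()); // prints 4
    }
}
```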
-

Data Provider Service

-

-

The data provider implementation (CancellableInputCoalescingSlowDataProvider.java) builds on top of the coalescing data provider.  To make the canceling feature useful, the data provider service has to limit the size of the request queue.  This is because this example simulates communication with a target, and once requests are filed into the request queue they cannot be canceled, just as a client can't cancel requests once it has sent them over a socket.  So instead, if a flood of getItem() calls comes in, the service has to hold most of them in the coalescing buffer in case the client decides to cancel them.  Therefore the fileBufferedRequests() method includes a simple check before servicing the buffer: if the request queue is full, the buffer-servicing call is delayed.

-
        if (fQueue.size() >= REQUEST_QUEUE_SIZE_LIMIT) {
            if (fGetItemIndexesBuffer.isEmpty()) {
                fExecutor.schedule(
                    new Runnable() { public void run() {
                        fileBufferedRequests();
                    }},
                    REQUEST_BUFFER_FULL_RETRY_DELAY,
                    TimeUnit.MILLISECONDS);
            }
            return;
        }
-Beyond this change, the only other significant change is that before -the requests are queued, they are checked for cancellation.
-

Final Notes
-

The example given here is fairly simplistic, and chances are that the same example could be implemented using semaphores and free threading with perhaps fewer lines of code.  But what we have found is that as the problem gets bigger, the number of features in the data provider increases, the state of the communication protocol gets more complicated, and the number of modules needed in the service layer grows, and at that scale using free threading and semaphores does not remain safe.  Using a dispatch thread for synchronization certainly doesn't make the inherent problems of the system less complicated, but it does help eliminate race conditions and deadlocks from the overall system.
-

Coalescing and cancellability are both optimizations.  Neither of these optimizations affected the original interface of the service, and one of them needed only a service-side modification.  But as with all optimizations, it is often better to first make sure that the whole system is working correctly, and only then add optimizations where they can make the biggest difference in user experience.

-

The above optimizations can take many forms; as mentioned with coalescing, caching data that is retrieved from the data provider is the most common form of data coalescing.  For cancellation, many services in DSF build on top of other services, which means that even a low-level service can cause a higher-level service to retrieve data, while another event might cause it to cancel those requests.  A perfect example of this is a Variables service, which is responsible for calculating the values of expressions shown in the Variables view.  The Variables service reacts to the Run Control service, which issues a suspended event, and then requests a set of variables to be evaluated by the debugger back end.  But as soon as a resumed event is issued by Run Control, the Variables service needs to cancel the pending evaluation requests.
-

-
-
- - diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html deleted file mode 100644 index bd1b40112e6..00000000000 --- a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_data_model.html +++ /dev/null @@ -1,286 +0,0 @@ - - - - - DSF Data Model - - -

DSF Data Model

Version 1.0
Pawel Piech
© 2006, Wind River Systems.  Released under EPL version 1.0.
-

Overview

-

The data model aspect of DSF is only partially complete as compared -to the Concurrency and Services Models.  The goals for its design -are:
-

-
  1. Separate the structure of the data in the services from the model used for presentation in views.  This seems like basic model-viewer separation, which is something that we theoretically already have.  But in reality the current platform debug model APIs closely correspond to how the data is laid out in debug views, and even with the flexible hierarchy views it is difficult to provide alternative layouts.
  2. Allow for a modular implementation of services that contribute to the data model.
  3. Perform well with large data sets.
  4. Make the data model interfaces convenient to use by other services as well as by views.  Some interim designs of the DSF data model APIs were very well suited for populating views (through asynchronous content and label providers), but were very difficult to use for other purposes, such as by another service, or by a client that creates a dialog.  This led to services implementing two sets of interfaces for the same data, which was more expensive to develop and maintain.
  5. Allow for easy changes to the layout of data in views.  This is from the point of view of a debugger implementer who would like to modify the standard layout of debugger data.
  6. Allow users to modify the layout of data in views.  This is a logical extension of the previous goal.
-

That's a pretty ambitious set of goals, which partly explains why the design is not fully complete yet.  In particular, the last goal doesn't have any implementation at this point.  Other than that, we believe that our current design mostly meets the other goals.  It remains to be seen how well it will hold up beyond a prototype implementation.
-

The DSF data model is divided into two parts: a non-UI part that -helps services expose data in a consistent form, and a UI part that -helps viewers present the data.  They are described separately in -the two sections below.
-

-

Timers Example

-

A "timers example" is included with the DSF plugins, which demonstrates the use of the data model and view model APIs.  It is probably much easier to digest this document while referring to this example for usage.
-

-

Data Model API (org.eclipse.dd.dsf.model)
-

As stated before, the aim of this API is to allow services to provide data with just enough common information that it can be easily presented in a view, but with a simple enough design that the data can also be accessed by non-viewer clients.  The type of data in services can vary greatly from service to service.  The data model API tries to find a common denominator for these divergent properties, and imposes the following restrictions:
-
  1. Each "chunk" of data that comes from a service has a corresponding IDataModelContext (Data Model Context) object.
  2. The DM-Context objects are generated by the data model services (IDataModelService) with either synchronous or asynchronous methods, taking whatever arguments are needed.  Put differently, how DM-Contexts are created is up to the service.
  3. The service provides a method for retrieving each "chunk" of model data (IDataModelData) using a method that requires no arguments other than the DM-Context.
-

DM-Context (IDataModelContext)
-

-The DM-Contexts are the most -important part of this design, so they warrant a closer look.  The -interface is listed below:
-
    public interface IDataModelContext<V extends IDataModelData> extends IAdaptable {
        public String getSessionId();
        public String getServiceFilter();
        public IDataModelContext[] getParents();
    }
First of all, the object extends IAdaptable, which allows clients to use these objects as handles that are stored with UI components.  However, the implementation of IDataModelContext.getAdapter() presents a particular challenge.  If the standard platform method of retrieving an adapter is used (PlatformObject.getAdapter()), then there can be only one adapter registered for a given DM-Context class, which has to be shared by all the DSF sessions that are running concurrently.  Thus one debugger that implements IStack.IFrameDMContext would have to have the same instance of IAsynchronousLabelAdapter as another debugger implementation that is running at the same time.  To overcome this problem, DSF provides a method for registering adapters with a session, using DsfSession.registerModelAdapter(), instead of with the platform (Platform.getAdapterManager().registerAdapters()).
-

The getSessionId() method serves two purposes.  First, it allows the IAdaptable.getAdapter() implementation to work as described above.  Second, it allows clients to access the correct dispatch thread (DsfSession.getSession(id).getExecutor()) for calling the service that the DM-Context originated from.
-

-

The getServiceFilter() method is included to allow for future development.  It is intended to allow the client to precisely identify the service that a DM-Context originated from, without having to examine the exact class type of the DM-Context.  This functionality will not really be needed until we start writing generic, data-driven clients.
-

-

The getParents() method allows DM-Contexts to be connected together into something that can be considered a "model".  Of course, most debugger data objects require the context of other objects in order to make sense: a stack frame is meaningless without its thread, debug symbols belong to a module, which belongs to a process, etc.  In other words, there is a natural hierarchy to the data in debug services which needs to be accessible through the data model APIs.  This hierarchy may be the same hierarchy that is shown in some debug views, but it doesn't have to be.  More importantly, this hierarchy should allow for a clean separation of debug services, and for a clear dependency graph between these services.
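The parent-chain idea can be sketched with simplified stand-ins for the DSF interfaces (the names and classes here are illustrative, not the real org.eclipse.dd.dsf.model types): a frame context points to its thread, which points to its process, and clients can walk the chain upward.

```java
public class ContextSketch {
    interface IDMContext {
        String getSessionId();
        IDMContext[] getParents();
    }

    static class Ctx implements IDMContext {
        final String fSessionId;
        final String fName;
        final IDMContext[] fParents;

        Ctx(String sessionId, String name, IDMContext... parents) {
            fSessionId = sessionId;
            fName = name;
            fParents = parents;
        }

        public String getSessionId() { return fSessionId; }
        public IDMContext[] getParents() { return fParents; }
    }

    public static void main(String[] args) {
        Ctx process = new Ctx("session-1", "process");
        Ctx thread  = new Ctx("session-1", "thread", process);
        Ctx frame   = new Ctx("session-1", "frame", thread);
        // Walk the parent chain from the frame up to its root context.
        IDMContext c = frame;
        while (c.getParents().length > 0) c = c.getParents()[0];
        System.out.println(((Ctx) c).fName); // prints "process"
    }
}
```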

-

View Model API (org.eclipse.dd.dsf.ui.model)
-

This is the component which allows the DSF data model to be presented in views with different, configurable layouts.  It is tightly integrated with the recently added (and still provisional) flexible-hierarchy viewers in the org.eclipse.debug.ui plugin (see the EclipseCon 2006 presentation for more details).  The platform flexible hierarchy framework already provides all the adapter interfaces needed to present the DSF data model in the viewers, and it is possible to do so directly.  However, the flexible hierarchy views were not specifically designed for DSF, and a few ugly patterns emerge when using them with the DSF data model interfaces directly.  The view model API tries to address these issues in the following way:
-
  1. It divides the adapter work for different views into separate ViewModelProvider objects.
  2. It defines the view layout in an object-oriented manner using the IViewModelLayoutNode objects.
  3. It consolidates the logic of switching to the dispatch thread in one place, and allows the ViewModelProvider objects to work only in the dispatch thread.
-

IViewModelLayoutNode

-The core of the logic in this design lies in the implementation of the IViewModelLayoutNode objects. -This interface is listed below:
-
public interface IViewModelLayoutNode {
    public IViewModelLayoutNode[] getChildNodes();
    public void hasElements(IViewModelContext parentVmc, GetDataDone<Boolean> done);
    public void getElements(final IViewModelContext parentVmc, GetDataDone<IViewModelContext[]> done);
    public void retrieveLabel(IViewModelContext vmc, final ILabelRequestMonitor result);
    public boolean hasDeltaFlags(IDataModelEvent e);
    public void buildDelta(IDataModelEvent e, ViewModelDelta parent, Done done);
    public void sessionDispose();
}
The getChildNodes() method allows the layout nodes to be combined into a tree structure which mimics the layout of elements in the view.  What the children are depends on the implementation: some may be configurable and some may be fixed.
-
The hasElements() and getElements() methods generate the actual elements that will appear in the view.  They are analogous to the flexible hierarchy API methods IAsynchronousContentAdapter.isContainer() and IAsynchronousContentAdapter.retrieveChildren(), and are pretty straightforward to implement.  Likewise, retrieveLabel() is directly analogous to IAsynchronousLabelAdapter.retrieveLabel().
-
The hasDeltaFlags() and buildDelta() methods are used to generate model deltas in response to service events. These are discussed in the next section.
-
Finally, in most cases the elements in the views correspond directly to IDataModelContext (DM-Context) objects of a specific type. For those cases, the DMContextVMLayoutNode abstract class implements the common functionality of that pattern.
-

Model deltas

The hasDeltaFlags() and buildDelta() methods are used to implement the IModelProxy adapter, and they are the trickiest aspect of this design. The difficulty is that the flexible-hierarchy views require the IModelProxy to translate data-model-specific events into generic model deltas that can be interpreted by the viewer. The deltas (IModelDelta) are tree structures that are supposed to mirror the structure of nodes in the tree, and they contain flags that tell the viewer what has changed in the view and how.* This means that if the model proxy receives an event for some IDataModelContext (DM-Context) object, it needs to know whether this object is in the viewer's tree and what full path (or paths) leads to it.
-

The model delta is generated by first calling the top layout node's hasDeltaFlags() with the received event. That node can either return true or ask each of its children whether they have deltas (each child in turn returns true or asks its own children, and so on). If a node returns true from hasDeltaFlags(), the asynchronous buildDelta() is called with the event and a parent delta node, to generate the delta elements and flags for that node. Once a layout node generates its delta objects, it still needs to call its children, which in turn add their own delta information, and so on.
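The recursion just described can be sketched in plain Java. This is an illustrative simplification, not the real DSF API: the interface and method names below are assumptions, the actual hasDeltaFlags() takes an IDataModelEvent, and the actual buildDelta() is asynchronous.

```java
import java.util.List;

// Illustrative sketch of the hasDeltaFlags() recursion described above:
// a layout node reports delta flags if it handles the event itself,
// or if any of its child nodes does. Names here are assumptions.
interface LayoutNodeSketch {
    List<LayoutNodeSketch> getChildNodes();

    boolean handlesEvent(Object event); // stands in for node-specific checks

    default boolean hasDeltaFlags(Object event) {
        if (handlesEvent(event)) {
            return true;
        }
        // Otherwise ask each child; a positive answer anywhere propagates up.
        for (LayoutNodeSketch child : getChildNodes()) {
            if (child.hasDeltaFlags(event)) {
                return true;
            }
        }
        return false;
    }
}
```

The same top-down traversal is then repeated by buildDelta() to attach delta nodes and flags along every path that answered true.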
-

-

* It is not strictly true that a full path to an element must always be present for model deltas to work. If the full path is not present, the viewer will try to find the element using an internal map that it keeps of all the elements it knows about. But since the viewer is lazy-loading, it is possible (and likely) that the element affected by an event is not even known to the viewer at the time of the event, and for some delta actions, IModelDelta.SELECT and IModelDelta.EXPAND, that is not acceptable.
-

- - diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_mi_instructions.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_mi_instructions.html deleted file mode 100644 index 7d6e2b51153..00000000000 --- a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_mi_instructions.html +++ /dev/null @@ -1,135 +0,0 @@ - - - - - GDB/MI Debugger on top of DSF - Instructions - - -

GDB/MI Debugger implementation based on DSF

-
-
-

Building and Running Instructions
-

-

To build:

-
    -
  1. Install the latest milestone of the Eclipse 3.3 SDK.
  2. Install the latest milestone of CDT 4.0.
  3. Install and configure gdb (cygwin gdb on Windows).
  4. Check out the following projects from /cvsroot/dsdp/org.eclipse.dd.dsf/plugins
-

To run:

-
    -
  1. Create a new "Managed make build project" called "hello".
  2. Create a simple hello.c source file:
-
-
#include <stdio.h>

int main(void) {
    printf("Hello world\n");
    return 0;
}
-
-
    -
  1. Build the project.
  2. Create a new "DSF C/C++ Local Application" launch configuration (the one with the pink icon), and set the executable and the entry point to "main".
  3. Launch and step through.
  4. If the "source not found" page appears, a path mapping needs to be created. This is an issue with the latest cygwin gdb.
    1. Click on the "Edit source lookup" button in the editor, or right-click on the launch node in the Debug view and select "Edit source lookup".
    2. Click on the "Add..." button.
    3. Select "Path Mapping" and click OK.
    4. Select the new "Path Mapping" source container and click the "Edit..." button.
    5. Once again, click the "Add..." button to create a mapping.
    6. Enter the path to map from. Look at the stack frame label in the Debug view: if the filename is something like "/cygdrive/c/workspace/hello/hello.c", enter the path up to the first real directory, "/cygdrive/c/workspace".
    7. Enter the correct file-system path for the directory entered above. In the example above, it would be "C:\workspace".
    8. Click OK three times and you'll be back in Kansas... ehm, the Debug view, that is.
    9. If the source doesn't show up right away, try stepping once.
-

Supported Platforms
-

Currently, only Windows with cygwin GDB is supported.
-
-
-

Current Features
-

- -
-
Updated Aug 25th, 2006
-
- - diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-1.png b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-1.png deleted file mode 100644 index b593371ee80..00000000000 Binary files a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-1.png and /dev/null differ diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-2.png b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-2.png deleted file mode 100644 index 0af43dc6a77..00000000000 Binary files a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model-2.png and /dev/null differ diff --git a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model.html b/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model.html deleted file mode 100644 index 8380a0e4ae3..00000000000 --- a/plugins/org.eclipse.dd.doc.dsf/docs/dsf_services_model.html +++ /dev/null @@ -1,363 +0,0 @@ - - - - - DSF Services Model - - -

DSF Services Model

-
-Version -1.0
-Pawel Piech
-
© 2006, -Wind River Systems.  Release under EPL -version 1.0.
-
-

The Debugger Services Framework (DSF) is primarily a service framework, defining rules for how services should be registered, discovered, organized into functional groups, communicated with, and started and ended. These rules help organize the services into a functional system that efficiently abstracts the capabilities of various debugger back ends.

-

DSF services build on top of the OSGi services framework, so it is important to understand OSGi services before looking at DSF itself. For an overview of OSGi, including services, see the presentation on OSGi from EclipseCon 2006. For detailed information, see the OSGi javadocs, primarily org.osgi.framework (ServiceRegistration, BundleContext, ServiceReference, Filter) and org.osgi.util.tracker.ServiceTracker.

-

Services
-

In OSGi, any class can be registered as a service. In DSF, services must implement the IDsfService interface, which requires that the service provide:

  1. Access to the DsfExecutor that must be used to access the service's methods.
  2. The full list of properties used to uniquely identify the service in OSGi.
  3. Startup and shutdown methods.

For the first two items, a service must use the data it received in its constructor. For the third item, a service must register and unregister itself with OSGi. Beyond that, this is all that services have in common; everything else is up to the specific service interface.
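As a rough sketch of the three requirements above (with assumed, simplified names and signatures — not the real IDsfService contract), the constructor supplies the executor and session identity, while initialize and shutdown bracket the OSGi registration:

```java
import java.util.concurrent.Executor;

// Simplified sketch of the three IDsfService requirements listed above.
// Names and signatures are assumptions for illustration only.
class SketchService {
    private final Executor executor;  // item 1: all calls go through this executor
    private final String sessionId;   // item 2: part of the unique OSGi properties
    private boolean registered;       // item 3: tracked across startup/shutdown

    SketchService(Executor executor, String sessionId) {
        // Items 1 and 2 come from constructor data, as described above.
        this.executor = executor;
        this.sessionId = sessionId;
    }

    Executor getExecutor() { return executor; }

    String getSessionId() { return sessionId; }

    void initialize() {
        registered = true;  // real code would call BundleContext.registerService()
    }

    void shutdown() {
        registered = false; // real code would unregister the OSGi registration
    }

    boolean isRegistered() { return registered; }
}
```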
-

Sessions (org.eclipse.dd.dsf.service.DsfSession)
-

DSF services are organized into logical groups called sessions. Sessions are necessary because we want multiple instances of systems built with DSF services to run at the same time. Since there is only a single OSGi service registry, if multiple services were registered with a given class name, OSGi would not be able to distinguish between them based on the class name alone. So there is an additional property, IDsfService.PROP_SESSION_ID, which every DSF service uses when registering with OSGi.
-

A Session object -(TODO: link javadoc) has the following data associated with it:
-

- -

The Session class also has a number of static features used to -manage Session objects:

- -

Startup/Shutdown

Managing the startup and shutdown process is often the most complicated aspect of modular systems. The details of how startup and shutdown should be performed are also highly dependent on the specifics of the system and the service implementations. To help with this, DSF provides two simple guidelines:

  1. There should be a clear dependency tree of all services within a session. When the dependencies between services are clearly defined, it is possible to bring the services up and down in an order that guarantees each running service can access all of the services it depends on.
  2. There needs to be a single point of control that brings up and shuts down all the services. In other words, services should not initialize or shut down themselves based on some global event that they are all listening to; rather, an external piece of logic needs to be in charge of performing this operation.

The main implication of the first guideline is that each service can get and hold references to other services without having to repeatedly check whether those references are still valid. This is because, by the time a given service is shut down, all services that depend on it will already have been shut down. The second guideline simply ensures that the startup and shutdown procedures are clear and easy to follow.
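Under these two guidelines, a single controller can bring services up in dependency order and down in reverse. A minimal sketch (illustrative names only; real DSF startup is asynchronous and driven by a launch sequence):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a single point of control: services listed so that each one
// depends only on those before it are started in order and stopped in
// reverse, so a running service's dependencies are always alive.
class StartupSequenceSketch {
    static List<String> bringUp(List<String> inDependencyOrder) {
        List<String> log = new ArrayList<>();
        for (String service : inDependencyOrder) {
            log.add("start " + service); // real code: service.initialize(...)
        }
        return log;
    }

    static List<String> bringDown(List<String> inDependencyOrder) {
        List<String> log = new ArrayList<>();
        for (int i = inDependencyOrder.size() - 1; i >= 0; i--) {
            log.add("stop " + inDependencyOrder.get(i)); // real code: service.shutdown(...)
        }
        return log;
    }
}
```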
-

org.eclipse.dd.dsf.service.DsfServicesTracker vs. org.osgi.util.tracker.ServiceTracker

The OSGi methods for obtaining and tracking services can be rather complicated. To obtain a reference to a service, a client has to:

  1. Get a reference to a BundleContext object, which can be retrieved from the plugin class.
  2. Obtain a service reference object by calling BundleContext.getServiceReference().
  3. Obtain an instance of the service by calling BundleContext.getService(ServiceReference).

Worst of all, when the client is finished using the service, it has to call BundleContext.ungetService(ServiceReference), because the bundle context counts the outstanding references to a given service. All this paperwork is useful for services that manage their own life cycle and could be unregistered at any time. To make managing references to these kinds of services easier, OSGi provides a utility class called ServiceTracker.
-

For DSF services, the life cycle of the services is much more predictable, but the process of obtaining a reference to a service is just as onerous. DSF therefore provides its own utility, separate from ServiceTracker, named DsfServicesTracker. The differences between the two are listed in the table below:
-

Property | OSGi ServiceTracker | DSF DsfServicesTracker
--- | --- | ---
Number of services tracked | While not strictly limited, it is optimized for tracking services of a single class type, or more typically a single service reference. | Designed to track services within a single DSF session.
When service references are obtained | Obtains references automatically as the services register themselves. | Service references are obtained as requested by the client, and cached.
Synchronization | Multi-thread accessible. | Can be accessed only on the session's dispatch thread.
Clean-up | Automatically un-gets references for services that are shut down. | The client must listen to session events and clean up as needed.

Both trackers are useful. Service implementations that depend on a number of other services are most likely to use the DSF DsfServicesTracker, while some clients that use a single service may find the OSGi ServiceTracker more suitable.
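The on-demand caching behavior described in the table can be illustrated with a toy stand-in. This is not the real DsfServicesTracker API: the names are assumptions, the lookup function stands in for the OSGi registry, and the real tracker must be used on the dispatch thread.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy illustration of the caching behavior described in the table above:
// a service reference is looked up on first request, then served from the
// cache until the client disposes the tracker. Names are assumptions.
class ServicesTrackerSketch {
    private final Map<Class<?>, Object> cache = new HashMap<>();
    private final Function<Class<?>, Object> lookup; // stands in for OSGi lookup

    ServicesTrackerSketch(Function<Class<?>, Object> lookup) {
        this.lookup = lookup;
    }

    @SuppressWarnings("unchecked")
    <T> T getService(Class<T> clazz) {
        // No synchronization: the real tracker is dispatch-thread-only.
        return (T) cache.computeIfAbsent(clazz, lookup);
    }

    void dispose() {
        cache.clear(); // the client cleans up on session shutdown events
    }
}
```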
-

-

Events

Events are the most unconventional component of the services package, and probably the most likely to need design modifications from the community. The design goal of the event system is to allow a hierarchy of event classes, where a listener can register itself for a specific event class or for all events that derive from a base class. The use case for this behavior is in the data model, where we would like to be able to capture all model-related events with a generic listener, while at the same time allowing services to make full use of class types.
-

The event model is made up of the following components:
-

There are only a few more notes about the events mechanism:

  1. Each event is always dispatched in its own Runnable submitted to the session's DsfExecutor.
  2. It is slightly convenient for clients not to have to register for each type of event separately.
  3. It is slightly inconvenient for clients that anonymous classes cannot be used as listeners, due to the public-class requirement.
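The first note — one Runnable per event, submitted to the session's executor — can be sketched with a plain single-thread executor standing in for DsfExecutor (class and method names here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of note 1 above: every event is delivered in its own Runnable
// submitted to a single dispatch executor, so listeners always observe
// events on the dispatch thread. DsfExecutor is stood in for by a plain
// java.util.concurrent single-thread executor.
class EventDispatchSketch {
    private final ExecutorService dispatch = Executors.newSingleThreadExecutor();
    private final List<Object> received = new ArrayList<>();

    void dispatchEvent(Object event) {
        // One Runnable per event; 'received' is touched only on the dispatch thread.
        dispatch.execute(() -> received.add(event));
    }

    List<Object> shutdownAndDrain() {
        dispatch.shutdown();
        try {
            dispatch.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }
}
```

Because the executor is single-threaded, events are observed by listeners in exactly the order they were dispatched.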
-

Debugger Services (org.eclipse.dd.dsf.debug)
-

The DSF framework includes a set of service interfaces for a typical debugger implementation. Functionally, they are pretty much equivalent to the platform debug interfaces, but they are structured in a way that allows a debugger to implement only some of them. For the startup and shutdown processes to work effectively, the dependencies between services need to be clearly defined. The dependencies between the main service interfaces are shown in the graph below:
-
-

It is also important to realize that a single hierarchy of interfaces is unlikely to adequately fit all the various debugger use cases, so some interfaces will likely be needed that partially duplicate functionality found in other interfaces. An example of this in the proposed interface set are the interfaces used to initiate a debugging session. The INativeProcesses service is intended as a simple abstraction for native debuggers, where the debugger only needs an existing host process ID or an executable image name. Based on this, an INativeProcesses debugger implementation should be able to initiate a debugging session and return the run-control, memory, and symbol contexts that are required to carry out debugging operations. By comparison, IOS and ITarget are generic interfaces that allow clients to manage multiple target definitions, examine a wide array of OS objects, and attach a debugger to a process or some other debuggable entity.
-
-

-

Disclaimer

Drafting large APIs that are intended to have many implementations and many clients is a notoriously difficult task. It is impossible to expect that a first draft of such interfaces will not require changes; only time and multiple successful implementations can validate them. While we can draw upon many examples of debugger APIs in Eclipse and in our commercial debugger, this is a new API, with a prototype that exercises only a small portion of its interfaces.
-
-
-
-
-
-
-
-
-
-
- - diff --git a/plugins/org.eclipse.dd.doc.dsf/toc.xml b/plugins/org.eclipse.dd.doc.dsf/toc.xml index cdda79bb01c..0c117d28640 100644 --- a/plugins/org.eclipse.dd.doc.dsf/toc.xml +++ b/plugins/org.eclipse.dd.doc.dsf/toc.xml @@ -4,12 +4,14 @@ - - - - - - - + + + + + + + + +