
jefftrull
12 June 2013 @ 03:51 pm
In my last post I suggested I might look into implementing one of the well-known Model Order Reduction techniques, and after reading the classic PRIMA paper I decided it was worth trying. A lot of fancy matrix math is involved, and I can't claim to understand it completely, but the main idea is to incrementally create a matrix representing a transformation from the original set of states describing an interconnect network (i.e., node voltages, source currents, inductor currents) to a new, smaller set of states which is faster to simulate. This matrix (called X in the paper) has as many rows as states in your original system, and as many columns as the number of states you want in your reduced system - you simply stop accumulating columns when you have enough. Generation of new columns relies heavily on the QR decomposition, which Eigen supports in several flavors.
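
To make the structure concrete, here is a rough sketch (my own, not the code linked below) of the block-Arnoldi loop at the heart of PRIMA, written with Eigen, using a single expansion point and no deflation handling.  Given the MNA matrices G (conductance), C (susceptance), and input matrix B from C dx/dt + G x = B u, it accumulates the projection matrix X one column block at a time:

#include <Eigen/Dense>
using Eigen::MatrixXd;

// thin Q factor of a (possibly rectangular) matrix via Householder QR
static MatrixXd thinQ(const MatrixXd& M)
{
    Eigen::HouseholderQR<MatrixXd> qr(M);
    return qr.householderQ() * MatrixXd::Identity(M.rows(), M.cols());
}

// q = desired number of reduced states (columns of X)
MatrixXd prima_projection(const MatrixXd& G, const MatrixXd& C,
                          const MatrixXd& B, int q)
{
    Eigen::FullPivLU<MatrixXd> Glu(G);     // factor G once, reuse for every solve
    MatrixXd V = thinQ(Glu.solve(B));      // first block: orthonormalized G^-1 * B
    MatrixXd X = V;
    while (X.cols() < q) {
        MatrixXd W = Glu.solve(C * V);     // next Krylov block: G^-1 * C * V
        W -= X * (X.transpose() * W);      // orthogonalize against earlier columns
        V = thinQ(W);                      // orthonormalize the new block
        X.conservativeResize(Eigen::NoChange, X.cols() + V.cols());
        X.rightCols(V.cols()) = V;         // append; stop when we have enough columns
    }
    return X.leftCols(q);
}

The reduced model is then (X^T C X) x' + (X^T G X) x = (X^T B) u, which is what gets handed to odeint.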

I applied PRIMA to my previous signal integrity example, reducing its state count from 6 (the "natural" value, consisting of its node voltages) to 4, then simulated the reduced model with odeint and plotted the results. The waveforms shown are the noise injected at the victim receiver:

[figure: prima_transient]

As a former chip guy I don't think I'd be happy with leaving this much noise margin on the table, but I imagine that you could write your signal integrity analysis tools to quantify the error involved with a given state count, and add more when needed. It also seems likely that real world interconnect models produced by extraction tools would have far more elements, making this trade-off more appealing.

The code may be found here.
 
 
jefftrull
22 May 2013 @ 06:58 pm
In a previous entry I described some experiments using a newly announced open-source numeric integration library called odeint. In the meantime it has been accepted into Boost and I've learned just enough about linear algebra to follow up my comment from the end:

The next logical step would be to take an arbitrary RC network, apply KCL, rearrange the resulting equations to isolate dx/dt, and thus automatically produce a model suitable for use with odeint

It turns out that there is a well-known procedure called Modified Nodal Analysis that maps circuit elements into a pair of matrices representing the system of equations that needs to be solved. Using this procedure, any circuit can be automatically turned into its equivalent matrix form. Here's how this works in a simple RLC tank circuit:

[figure: rlc]

We can write two equations representing KCL for the input and output circuit nodes, and another two to represent the input source's current and the current through the inductive load, to get four equations in four unknowns (our state variables):

[figure: kcl]

Separating the coefficients of the voltage and current time derivatives and rewriting in matrix form produces:

[figure: MNA]

You can see that each component type produces a characteristic stamp within the susceptance or conductance arrays, at the rows and columns corresponding to their connecting nodes. Values from additional components are simply summed if they overlap. This is the heart of MNA.
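
As an illustration (a minimal sketch in Eigen, not the code linked below), a two-terminal element's stamp is just a handful of additions at the rows and columns of its nodes, with node 0 (ground) dropped from the matrices:

#include <Eigen/Dense>
using Eigen::MatrixXd;

// conductance stamp for a resistor of value r between nodes n1 and n2
void stamp_resistor(MatrixXd& G, int n1, int n2, double r)
{
    double g = 1.0 / r;
    if (n1 > 0) G(n1 - 1, n1 - 1) += g;
    if (n2 > 0) G(n2 - 1, n2 - 1) += g;
    if (n1 > 0 && n2 > 0) {
        G(n1 - 1, n2 - 1) -= g;
        G(n2 - 1, n1 - 1) -= g;
    }
}

// the capacitor stamp has the same shape, but goes into the susceptance matrix
void stamp_capacitor(MatrixXd& C, int n1, int n2, double c)
{
    if (n1 > 0) C(n1 - 1, n1 - 1) += c;
    if (n2 > 0) C(n2 - 1, n2 - 1) += c;
    if (n1 > 0 && n2 > 0) {
        C(n1 - 1, n2 - 1) -= c;
        C(n2 - 1, n1 - 1) -= c;
    }
}

Inductors and voltage sources add an extra row and column for their branch current, which is why the state vector above includes currents as well as node voltages.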

Now all that remains is to rewrite this equation so that each first derivative term is alone in its own row - or, looked at another way, so that the susceptance array becomes the identity matrix. This is evidently the same thing as "solving" the system of equations. Several open-source linear algebra libraries are available to do this work; I happened to choose Eigen due to its friendly reputation and also because it had the exact code I needed in one of its tutorial examples. The resulting code produced exactly the same results, but now has fewer lines and is implemented in a more general way. You can find it here.
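
The core of that rearrangement, taking the matrix equation above in the form C dx/dt = -G x + B u, boils down to something like this in Eigen (a sketch, not the linked code, assuming the susceptance matrix is invertible, as it is for this circuit):

#include <Eigen/Dense>
using Eigen::MatrixXd;

// Given C dx/dt = -G x + B u, produce dx/dt = A x + E u for odeint
void to_state_space(const MatrixXd& C, const MatrixXd& G, const MatrixXd& B,
                    MatrixXd& A, MatrixXd& E)
{
    Eigen::FullPivLU<MatrixXd> lu(C);   // one factorization serves both solves
    A = lu.solve(-G);                   // -C^-1 * G
    E = lu.solve(B);                    //  C^-1 * B
}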

I can now see a path to extending it for larger networks, perhaps reading from a file - SPEF parser, anyone? Also, the MNA representation allows for further processing - if I get ambitious I may try to implement some well-known Model Order Reduction algorithms, or do some graph processing (locating disconnected nets or resistive loops in the data).
 
 
jefftrull
When I describe my previous work using ExtJS data grids in Visualforce pages, one of the first questions I usually get is "can this be done with jQuery instead?"  I've recently found the time to do a thorough investigation of this question, and I'm happy to say the answer is "yes".

I began by researching jQuery data grids.  If you're new to this style of UI widget, it's basically a dynamic spreadsheet that displays, handles paging for, and provides editing capability to, tables of data.  I considered  jqGrid, DataTables, and SlickGrid.  At the time, SlickGrid seemed the most polished.

I also needed to pick an MVC framework - one is included as part of ExtJS, but their framework is unusually well-integrated; most jQuery client-side apps seem to be built by mixing and matching separate libraries, and this was the situation I found myself in.  I ended up selecting Backbone.js partly due to its popularity, but also because someone had already done the work of integrating it with SlickGrid - the aptly-named Slickback.  Furthermore, it turned out that Backbone had a key feature that made it easy to use with Salesforce.com: the Backbone.sync method.  Backbone routes all server communication through this method, allowing users to intercept the four CRUD actions and replace them with whatever code is needed - in my case, the same four RemoteAction methods I built for the ExtJS proxy.

The resulting dynamic grid component has most of the features of my previous ExtJS version and may be found here  (pick up the Slickback_Test page, and the SlickbackComponent and Slickback_Date_Editor components).  You'll also need to get Slickback and upload it as a static resource;  I used this link.

Based on my experiences implementing the two approaches (ExtJS and Backbone/SlickGrid) I still feel that ExtJS has some real advantages for this work.  Here are the most important ones, to me:
  1. ExtJS is a polished and tightly integrated framework with a rich set of widgets and a consistent, overarching philosophy about how to design web apps.  The MVC parts are designed to work together smoothly with the UI widgets;  once you've built a model/store/proxy stack you can attach any one of their widgets and it "just works".  By contrast, with jQuery I was using Underscore, Backbone.js, SlickGrid, and jQuery UI, and gluing them together with extra code for inheritance and event handling.  It was possible, but it seemed like a lot of extra work.
  2. Backbone as an MVC framework is good, but less typed than I would have liked.  I would prefer each attribute on a model to have a type option that indicates, among other things, how to store it, how to parse data coming from the server (and store it back), what values are valid, etc.  Backbone has a parse option that takes care of interpreting inbound server data, but that's it.  Instead the metadata about each attribute is spread around the code, and what the models actually store are basically blobs with names.
  3. The semi-manual event handling required to integrate SlickGrid and Backbone made a key feature - batch update - difficult enough that I postponed it for a later rev.  Basically a "batch update" mode allows you to make a number of changes to the grid, and then save them all at once.  In ExtJS this is controlled by a single store option (autoSync) but to implement this with Backbone would have meant adding a dirty attribute to each model, plus some way of showing the modified state in SlickGrid, and probably some additional event connection code.
  4. Slickback was very helpful to have, and certainly made things a lot easier, but it doesn't seem to be actively maintained (last commit 9+ months ago).  SlickGrid has a model-like concept, and Backbone has a view-like concept;  both projects are moving forward and are likely to provide better ways to integrate in the future that may obviate Slickback.  Finally, Slickback re-implements the SlickGrid editors and only provides a subset of them (I had to make my own Date editor).
Notwithstanding the above, Backbone and SlickGrid have some useful advantages that are worth mentioning:
  1. They utilize the jQuery deferred mechanism.  This provides a nice way to organize work that must happen after a server request returns.  Normally this would be done with a big tree of asynchronous callbacks; deferreds let you flatten that structure.  A good tutorial is here.  I found this made for cleaner code than with ExtJS.
  2. The Salesforce.com back-end integration is actually easier, because we don't have to convert between the ExtJS Direct version Salesforce uses for its Remoting technology, and the more recent ones within the ExtJS library.  We just supply the four CRUD functions, and no hacks.
  3. Finally, there's the reason many people cite for avoiding ExtJS: licensing.  Sencha does provide a GPL option - which I use - but if you want to hide your source you'll need a commercial license.  It's cheap compared to developer time, IMO - details here.
In summary, you can make a nice grid widget for a Visualforce page using jQuery and a handful of other libraries.  It's not quite as slick as with ExtJS but if you're a jQuery fan this may be the approach for you.
 
 
jefftrull
A recent client has seen fit to open source some of the work I did for them in the area of 3D graphics.  They had a holographic display system that expected users to supply 3D models for display in the form of Ogre3D mesh files.  Ogre is a hardware- (and rendering layer-) independent open source library for doing 3D graphics, and was my own introduction to the field.  The "mesh" files it uses are an Ogre-specific format that corresponds closely to the hardware data structures used, for example, in OpenGL.  Unfortunately the largest sources of public 3D models (e.g., Google's 3D Warehouse) use a different format known as Collada;  this format is nearly universally accepted as input or generated as output by popular 3D rendering tools (with varying degrees of standard conformance and interpretation).

At some point it became clear that we would benefit from having our own Collada to Ogre mesh converter, so I investigated implementation options and existing code.  The only high-level open source Collada parser library that is currently being maintained is OpenCollada;  it does purport to have an Ogre mesh converter, but I found that instead of converting the contents of an entire Collada file (consisting of numerous geometries and instances arranged in a scene hierarchy) it would simply output the most recent geometry it found, as-is.  Nevertheless it seemed like a good starting point, and so I began there.

For a good overview of the current state of Collada parsing libraries see this post from the Collada forums.  Note that if your target system is WebGL/Javascript instead of C++, you are in pretty good shape thanks to libraries like GLGE and Three.js, both of which contain Collada parsers.

After thinking about what my client would want to do with Collada data, it seemed to me that there were two general cases:
  1. Treating an entire Collada file as a single selectable/movable object.  This would apply in cases where the object was a "leaf" element of a scene and the internal hierarchy was uninteresting.  For this we would need to build a single mesh from the scene hierarchy present in the file, copying and transforming the geometry elements as appropriate.
  2. Replicating the scene hierarchy from Collada inside Ogre.  Perhaps the hierarchy carried some information, or we wanted to reduce memory use (since individual geometries/meshes might be repeated).  In that case loading the scene graph "live" within Ogre made the most sense.
It seemed to me that a lot of the tricky code, especially in mesh generation, could be shared between these two approaches, and I managed to implement it accordingly.  At the heart of the code I rely on the Ogre ManualObject class to build up a mesh one vertex at a time; ManualObject has a method to convert itself to a Mesh which can then be written to disk.  Unfortunately this approach requires a window system; I haven't found a way around this yet, and so the mesh conversion process pops up a small window each time it converts a file.  Nevertheless it offers both command-line conversion of Collada to mesh files and a scene loader you can use from within an Ogre application.
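
The mesh-building core works roughly like this (a simplified sketch, not the actual converter; error handling, texture coordinates, and submesh management are omitted):

#include <vector>
#include <string>
#include <OgreSceneManager.h>
#include <OgreManualObject.h>
#include <OgreMeshSerializer.h>

// build a Mesh from flat vertex/index data and write it to a .mesh file
Ogre::MeshPtr build_mesh(Ogre::SceneManager* sceneMgr,
                         const std::vector<Ogre::Vector3>& verts,
                         const std::vector<Ogre::Vector3>& normals,
                         const std::vector<unsigned>& indices,
                         const std::string& materialName,
                         const std::string& meshName)
{
    Ogre::ManualObject* mo = sceneMgr->createManualObject(meshName + "_builder");
    mo->begin(materialName, Ogre::RenderOperation::OT_TRIANGLE_LIST);
    for (std::size_t i = 0; i < verts.size(); ++i) {
        mo->position(verts[i]);          // one vertex at a time, as described above
        mo->normal(normals[i]);
    }
    for (unsigned idx : indices)
        mo->index(idx);
    mo->end();

    // convertToMesh is the step that (unfortunately) requires a live render system
    Ogre::MeshPtr mesh = mo->convertToMesh(meshName);
    Ogre::MeshSerializer serializer;
    serializer.exportMesh(mesh.getPointer(), meshName + ".mesh");
    return mesh;
}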

No graphics-related post would be complete without a picture, so here is a screen capture of running the scene loader on the rubber duck model supplied as part of GLGE:


The code may be found here.  I think there are probably a lot of improvements that could be made and I look forward to receiving bug reports and pull requests.  Please send them!

 
 
jefftrull
25 May 2012 @ 07:05 pm
It's a common pattern in EDA applications to have a Qt-based GUI along with a Tcl shell for scripting and for access to the deeper functionality of the tool.  Often the GUI is used for viewing results and debugging, then once a reasonable design flow is established, the GUI is disabled and the tool is run in batch mode via a script.  Although as a scripting language Tcl is showing its age, its entrenched position among chip designers means it's unlikely to go away soon and we will likely continue to see this combination.  As a result it's worthwhile to consider how Tcl and Qt interact.

In interactive applications of this type you often want users to be able to switch between the terminal (which may be the shell from which they launched the tool, or a command window embedded in your application) and the GUI while using only a single thread.  Unfortunately both Qt and Tcl normally run their own "event loops" - that is to say, a typical main program looks like this:

    do_some_setup;   // user adds hooks here
    run_main_loop;   // execute commands, process events, etc. for duration of process
    return 0;


So it seems you have to pick one library or the other to "own" the main loop of your program.  Fortunately both Tcl and Qt are cooperative, and supply functions you can call to let them process any pending events.  So you can make your own event loop that calls those functions, and get the desired interactivity.  I tested this using the Qt "animated tiles" demo (in main.cpp):

#include <tcl.h>
...
// supply external event loop for Tcl to use after it initializes:
Tcl_SetMainLoop([]() {  // (C++11 lambda syntax)
                        while (true) {
                          QApplication::processEvents();
                          Tcl_DoOneEvent(TCL_DONT_WAIT);
                        }
                     });

// create a Tcl interpreter and connect it to the terminal
Tcl_Main(argc, argv, [](Tcl_Interp*)->int{return 0;});

// return app.exec();


With this approach we now have smooth interactivity between the shell from which this demo was launched and the GUI.  But wait, what's that fan sound?  Why is my lap getting warm?  Oh right...  this is a polling loop, and it never sleeps.  We are constantly checking with Qt and Tcl to see if they need to process anything.  Maybe what we can do is put a small sleep in between each iteration:

Tcl_SetMainLoop([]() {
                       Tcl_Time wakeup_period;
                       wakeup_period.sec = 0;
                       wakeup_period.usec = 100000; // 10 times per second for interactivity

                       while (true) {
                         QApplication::processEvents();
                         Tcl_WaitForEvent(&wakeup_period);
                         Tcl_DoOneEvent(TCL_DONT_WAIT);
                       }
                     });


OK, that's a bit better.  "top" is no longer showing anything particularly bad and I can't hear the fan anymore.  Unfortunately the animation no longer looks smooth - probably because it's doing all its work in 10 bursts each second, instead of spread out appropriately.

At this point I remember that underneath the hood, neither Qt nor Tcl polls in its own event loop.  Instead, each registers what it's interested in and lets the operating system wake it up when something happens.  In Unix this happens through the select() system call.  What I really need to do is have Qt and Tcl work with each other so they can both go to sleep and wait for events of interest to either one of them, then dispatch appropriately.

After a look at the documentation I find that Tcl has had support for event loop integration going back to Motif.  Tcl only requires that you supply a Notifier, which is just a struct of function pointers that provide an API for Tcl to register its interest in particular file descriptors, and to register for timer callbacks.  The example I followed is here.  On the implementation side, Qt provides a nice API for monitoring activity on file descriptors through the QSocketNotifier class; anytime something of interest happens Qt emits a signal that we can listen for and pass on to Tcl.  The resulting code is here, and the main program from above now looks like:

QtTclNotify::QtTclNotifier::setup();  // registers my notifier with Tcl

// tell Tcl to run Qt as the main event loop once the interpreter is initialized
Tcl_SetMainLoop([](){QApplication::exec();});

// create a Tcl interpreter and connect it to the terminal
Tcl_Main(argc, argv, [](Tcl_Interp*)->int{return 0;});


Now the animation is smooth, the shell is responsive, and the CPU load is reasonable.  Success!
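
For the curious, the file-handler half of that notifier boils down to something like the following (a simplified sketch using Qt 5 signal syntax - the real code linked above also routes callbacks through Tcl's event queue and implements the timer and alert hooks that Tcl_SetNotifier requires):

#include <tcl.h>
#include <QObject>
#include <QSocketNotifier>
#include <map>
#include <memory>
#include <vector>

// bookkeeping for one file descriptor Tcl has asked us to watch
struct QtTclFileHandler {
    Tcl_FileProc* proc;        // callback Tcl wants run when the fd is ready
    ClientData    clientData;
    std::vector<std::unique_ptr<QSocketNotifier>> notifiers;
};

static std::map<int, QtTclFileHandler> handlers;

// Tcl calls this (through the struct of function pointers we register)
// whenever it becomes interested in a file descriptor
void createFileHandler(int fd, int mask, Tcl_FileProc* proc, ClientData clientData)
{
    QtTclFileHandler& h = handlers[fd];
    h.proc = proc;
    h.clientData = clientData;
    h.notifiers.clear();
    if (mask & TCL_READABLE) {
        auto qsn = std::make_unique<QSocketNotifier>(fd, QSocketNotifier::Read);
        QObject::connect(qsn.get(), qOverload<int>(&QSocketNotifier::activated),
                         [proc, clientData](int) { proc(clientData, TCL_READABLE); });
        h.notifiers.push_back(std::move(qsn));
    }
    // TCL_WRITABLE and TCL_EXCEPTION are handled the same way with
    // QSocketNotifier::Write and QSocketNotifier::Exception
}

// Tcl calls this when it no longer cares about the descriptor
void deleteFileHandler(int fd)
{
    handlers.erase(fd);        // destroying the QSocketNotifiers stops the monitoring
}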

 
 
jefftrull
I just returned from this year's C++ Now (formerly BoostCon) conference, where one of my favorite presentations was from a couple of students at the University of Potsdam, who have released an open-source numeric integration tool for ordinary differential equations, called odeint.  Their library handles any differential equation of one independent variable (say, "t") that can be written this way:

dx/dt = f(x, t)
where the state variable x may be a vector. Higher-order (second, third, etc.) derivatives are handled by promoting the lower-order derivatives to state variables themselves - for example, d2x/dt2 = g(x, dx/dt, t) becomes the pair dx/dt = v, dv/dt = g(x, v, t).

Of course as I was watching the presentation I immediately thought of circuits, and in particular the linear circuits that arise when modeling interconnect.  Optimizing routing for speed, signal integrity, etc. is a thorny problem and in the systems I'm aware of is usually handled by rules of thumb produced from repeated Spice runs on your process technology.  Being able to run an accurate simulation of a routing configuration directly in your tools would be incredibly empowering.  So I decided to give it a try.

My first test case was a basic damped RLC circuit with a step (voltage) input.  I needed three state variables for this: the input voltage, the inductor current, and the output voltage.  KCL on the output node gave me the only interesting one of the three dx/dt equations, and the output (viewed in gnuplot) looked great.  I decided to try a signal coupling scenario next.
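
For reference, here's a self-contained sketch in the same spirit (the topology - a series R and L feeding a grounded capacitor - and the component values are representative guesses, not necessarily my actual test case):

#include <boost/numeric/odeint.hpp>
#include <array>
#include <iostream>

using state_type = std::array<double, 3>;            // [Vin, iL, Vout]

int main()
{
    const double R = 100.0, L = 20e-9, Cap = 1e-12;

    // dx/dt = f(x, t) for the RLC: the step input is constant after t = 0
    auto rlc = [=](const state_type& x, state_type& dxdt, double /*t*/) {
        dxdt[0] = 0.0;                               // input voltage held at its final value
        dxdt[1] = (x[0] - R * x[1] - x[2]) / L;      // voltage across the inductor
        dxdt[2] = x[1] / Cap;                        // KCL at the output node
    };

    state_type x = {1.0, 0.0, 0.0};                  // 1V step applied at t = 0
    boost::numeric::odeint::integrate_const(
        boost::numeric::odeint::runge_kutta4<state_type>(),
        rlc, x, 0.0, 10e-9, 1e-12,
        [](const state_type& x, double t) {          // observer: dump t and Vout for gnuplot
            std::cout << t << " " << x[2] << "\n";
        });
}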

The coupling scenario was more complex than the RLC circuit, but even so most of the dx/dt equations came directly from KCL.  The tricky part came when considering the nodes on either side of the coupling capacitor connecting the aggressor and victim wires.  The current through this capacitor is a function of the difference in the derivatives of the nodes on either end;  you end up with two equations (KCL on the nodes) in two unknowns (the two node derivatives).  I substituted manually and got the simulation running:

To check accuracy I built the same circuit in NGSpice and compared results from .MEASURE statements to raw data extracted from my simulation; I got a pretty good match (within 0.5%) on delay and an excellent match (within 0.1%) on the victim receiver voltage.  The source code is here and my NGSpice deck is here.

The next logical step would be to take an arbitrary RC network, apply KCL, rearrange the resulting equations to isolate dx/dt, and thus automatically produce a model suitable for use with odeint.  I don't currently know how to do this, but I've a feeling matrix algorithms are involved :)
 
 
jefftrull
30 April 2012 @ 08:03 pm
I've been meaning to experiment with Field Sets since I first read about them last year.  Because they are configured via "clicks not code" they are more accessible to administrators, and they can be changed independently of the code in the case of a managed package. As I began to review the supplied examples, though, I became concerned - they all showed field sets being used via the '{!xx[yy]}' syntax to access the value, not the name, of the chosen fields.  I needn't have worried, as using this syntax directly in the text:

{!$ObjectType.Contact.FieldSets.CTFields}

resolves to a comma+space separated list of field names.  Some minor changes to my JavaScript and the controller allow it to accept either a hardcoded field list or a field set.  Updated code may be found on my github as usual.

For now field sets aren't accessible from Apex - just Visualforce - but they will be very soon.  The amazing Abhinav Gupta covers the new development as well as a temporary workaround here.

 
 
jefftrull
Ever since I heard about the new Force.com REST API I've wanted to try it out and see whether it would make implementing the data stores for the ExtJS grids I posted about previously easier, or make them faster.  I finally had the opportunity to try this out, but in the meantime the Javascript Remoting feature was also released, so I decided to implement the work I had previously done with the AJAX Toolkit using both new techniques, and then compare the results for all three.

These implementations are based on ExtJS 3.4; 4.0 includes a significantly revamped Store concept that I'll need to investigate, but I believe the performance comparisons should be valid.

The Approaches:
  1. AJAX Toolkit: my original approach.  Uses a Javascript SOAP implementation of the existing Web Services API.  It's easy to use and returns objects (SObject, DescribeResult, etc.) that resemble their equivalents in Apex, on which you operate directly.  Unfortunately it's less efficient than REST because of the SOAP/XML overhead.  You also have to load an extra Javascript library (connection.js), which may worsen page load times.
  2. REST API: Much like the AJAX Toolkit, but uses the increasingly popular REST approach with your choice of XML or JSON for communication.  It isn't strictly REST - you must use a SOQL query to GET records - but for create, update, and delete it does a good job of matching the REST approach used in other APIs.  A bit of setup is required, though, because REST was (apparently) not originally intended for use in Visualforce pages and there is no "endpoint" on VF servers.  Instead you must use a special proxy, and enable this in your org.  Pat Patterson has a good explanation of the problem and its solution.
  3. Javascript Remoting: This is a new feature which allows controller methods to be "remoted" into Javascript functions.  Primitives and return types are converted automatically for you, and it handles all the communication headaches.  You only need to supply success/failure callbacks.  In addition, this feature is acknowledged to be based on the ExtJS Direct concept, which makes it easier to integrate with ExtJS stores.  I found only two usability issues: first, although even complex method return values are converted automatically, method arguments can only be primitives.  This means complex structures have to be rendered as strings by the caller and then unpacked in the controller.  Second, remoted methods must be static, which means there can be no per-object state.  This means, among other things, no caching of query results for paging purposes.

[figure: transaction_time]

You can see that Remoting does quite well on the Describe call (which I use to set up properties of grid columns), but less well on the record transfer (the first grid page, of 18 rows).  Looking at the amount of data transferred is instructive:

[figure: data_transfer_comparison]

The AJAX Toolkit and REST both deliver a fair amount of data on the Describe call compared to Javascript Remoting.  This is probably due to the fact that I only use a subset of the information available from the call.  My Describe remote method only supplies the information I actually need.  You can also see that Remoting transfers more data than REST, per page.  If you inspect the data transferred (e.g., in Firebug), you can see that JS Remoting uses a fair bit of extra formatting information, presumably to aid deserialization/error checking.

Code for the components, the JS Remoting controller, and a test page are available now on Github.
 
 
 
 
jefftrull
05 December 2010 @ 11:05 am
For some time I've been thinking about the Campaign tree component I described on here previously, and wondering if it would be possible to extend it, using the ExtJS Drag and Drop functionality, to provide a nice hierarchy editing feature.  Since Dreamforce '10 is coming up I spent some extra time this week exploring this question, and it turns out the answer is "yes".  The component and controller are available for your download and experimentation.  Improvements on the original include:
  1. Works for any object with "Name" and "ParentId" fields.  For example, Account hierarchy can be viewed and edited instead with only a change to the component's object attribute.
  2. Loads data on demand instead of at page load time.  This means the page loads faster, but each time you expand a node there will be a delay for the server to return the children of the node.
  3. Uses strictly Visualforce and not the "AJAX Toolkit".  I did this as an experiment (suggested by Abhinav Gupta).  I'm still not sure which is better - this approach requires some tricky use of actionFunction "rerender" and "oncomplete" attributes that may not be as easy for readers of the code (or writers, for that matter :) to understand, but it does work, and arguably is more efficient.
Here's a simple example page using this component:

<apex:page standardController="Campaign">
    <!-- Campaign hierarchy editor using custom VF component based on ExtJS TreePanel widget -->
    <!-- by Jeff Trull 2010-12-03 -->
    <apex:form >
        <apex:pageBlock title="Campaign Hierarchy Editor" tabStyle="Campaign">
            <c:Hierarchy_Editor object="Campaign" fn="rerender_detail"/>
        </apex:pageBlock>
        <apex:actionFunction name="rerender_detail" rerender="campaigndetails">
            <apex:param name="campid_passthru" assignTo="{!campaign.Id}" value=""/>
        </apex:actionFunction>
        <!-- For some reason I have to enclose the following inside an outputPanel for -->
        <!-- conditional render to work properly.  Without it rerender still works, but does not take into account "rendered" -->
        <apex:outputPanel id="campaigndetails">
            <apex:outputText value="Double-click to display details of a Campaign" rendered="{!campaign.id == null}"/>
            <apex:detail title="Selected Campaign Details" rendered="{!campaign.id != null }" subject="{!campaign.id}" relatedList="false"/>
        </apex:outputPanel>
    </apex:form>
</apex:page>

As usual, I'd love to hear from anyone who finds this useful or has comments on the code!
 
 
jefftrull
21 September 2010 @ 05:10 pm
I'd been thinking of making my earlier work on ExtJS grids, trees, and menus into Visualforce components and after some recent encouragement from Abhinav Gupta (whose excellent blog is a great resource for Visualforce developers) I finally got it done.  The data grid was fairly straightforward, but the Campaign hierarchy selectors proved a little bit trickier.  A way to communicate the chosen value to the user's controller or page was needed.  Finally I realized that apex:actionFunction could do what I needed - setting controller values, re-rendering sections of the user's page, and executing arbitrary Apex code, all in one.  At the same time we get a general callback mechanism.  Here's my test page for the Campaign chooser:

<apex:page standardController="Opportunity">
    <!-- A page to test using the campaign menu component -->
    <apex:form >
        <apex:pageBlock title="Selecting Campaigns via Menu Tree" tabStyle="Campaign">
            <c:Campaign_Select_Menu fn="rerender_detail"/>
        </apex:pageBlock>
        <apex:pageBlock title="Selecting Campaigns via TreePanel">
            <c:Campaign_Select_Tree fn="rerender_detail" allowinactiveselect="true"/>
        </apex:pageBlock>
        <apex:pageBlock title="Selected Campaign Details" id="campaign_pageblock" tabStyle="Campaign">
            <apex:pageBlockSection columns="4">
                <apex:outputField id="campid" value="{!opportunity.campaignid}"/><br/>
            </apex:pageBlockSection>
        </apex:pageBlock>
        <!-- Create an action to update the campaign ID from the selected value and rerender the campaign information -->
        <apex:actionFunction name="rerender_detail" rerender="campaign_pageblock">
            <!-- It seems if you call this param "ID", bad things will happen -->
            <apex:param name="sekritid" assignTo="{!opportunity.campaignid}" value=""/>
        </apex:actionFunction>
    </apex:form>
</apex:page>

The apex:actionFunction tag above creates a JavaScript function called rerender_detail, which, when called, sets the opportunity CampaignId to the selected value and rerenders the campaign_pageblock section.  You could also specify an "action" parameter which would call a controller function.

The datagrid component is used like this:

<apex:page >
    <c:ExtJS_Data_Grid_from_SObject object="Contact" fields="Id,FirstName,LastName,Birthdate,Email" rows="18" minimized="true"/>
</apex:page>

The "minimized" parameter will bring up the grid in collapsed form, which is good if it contains a lot of data that the user may not want to see (the load from the server doesn't happen until the grid is expanded).

The Campaign menu, Campaign Tree, and Data Grid components are now available online.  I'd like to hear any thoughts or feedback people have, so please comment.