Hibernating Rhinos

Zero friction databases

Does a blog post have an image?

Following on from my previous post, a different requirement that I may have is to have a picture for each blog post. I say may have, because this is how the designer designed the website: in the HTML that I got, each blog post has a picture. But in practice, neither the Hibernating Rhinos blog nor Ayende's blog has a picture for every blog post. This is not even a requirement for those blogs. After all, blog posts are all about text, so it is perfectly valid to have a blog post without an image.

At first I thought to just change the design and remove the picture. But then I thought: some of the blog posts do have an image or two in them. What if I just parse the blog content and look for <img /> tags there? I could use those! It seemed worth testing.

A quick search led me to this answer on StackOverflow.

Based on that, I modified the code in the previous post to the following:

RegexImg
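
The screenshot of the modified code isn't reproduced here, but based on that StackOverflow answer the extraction presumably looks something like this sketch (the names and exact pattern are mine, not the code from the screenshot):

// Sketch of the regex-based extraction (illustrative names and pattern only).
// Requires System.Text.RegularExpressions.
private static readonly Regex ImgRegex =
    new Regex("<img.+?src=[\"'](.+?)[\"']", RegexOptions.IgnoreCase | RegexOptions.Singleline);

public static string TryGetFirstImageUrl(string html)
{
    if (string.IsNullOrEmpty(html))
        return null;

    var match = ImgRegex.Match(html);
    return match.Success ? match.Groups[1].Value : null; // null means: no image in the post
}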

Very simple. I'll wait to see how this behaves in practice, with real data. And now I'm thinking that I can improve this even further, by querying the actual images and picking the best one that matches the target dimensions.

I like this too.

Async await. Wait them All!

While we're working on the new website for the company, I had a fairly simple task to complete. The homepage shows two blog posts at the bottom of the page: one from the company blog, which is this blog, and the other from Ayende's blog. This is a simple task, but also an interesting one. Let's see why.

Look at the following snippet of code:

public void Consume(GetLatestBlogPosts message)
{
    var ayendePost = GetLatestBlogPostAsync("http://ayende.com/blog/rss");
    var companyPost = GetLatestBlogPostAsync("http://blog.hibernatingrhinos.com/rss");

    Task.WaitAll(ayendePost, companyPost);

    UpdateIfNew(ayendePost.Result, companyPost.Result);
}

What it does is very simple. The GetLatestBlogPostAsync method returns a Task<BlogPost> which holds the latest post of the requested blog. I run this against two blogs in order to fetch both latest posts asynchronously, and I use Task.WaitAll to wait for both of the tasks to finish. Then I build a LatestBlogPosts entity, which is like a view model, and put it in the database. See the UpdateIfNew method for more details:

private void UpdateIfNew(BlogPost ayendePost, BlogPost companyPost)
{
    using (var session = DocumentStore.OpenSession(DocumentStoreHolder.CompanyWebsite))
    {
        var latestPosts = session.Load<LatestBlogPosts>(LatestBlogPosts.IdConst);
        if (latestPosts == null)
        {
            latestPosts = new LatestBlogPosts();
            session.Store(latestPosts, LatestBlogPosts.IdConst);
        }

        if (latestPosts.Ayende == null ||
            latestPosts.Ayende.Title != ayendePost.Title ||
            latestPosts.Ayende.Description != ayendePost.Description)
        {
            latestPosts.Ayende = ayendePost;
        }

        if (latestPosts.HibernatingRhinos == null ||
            latestPosts.HibernatingRhinos.Title != companyPost.Title ||
            latestPosts.HibernatingRhinos.Description != companyPost.Description)
        {
            latestPosts.HibernatingRhinos = companyPost;
        }

        session.SaveChanges();
    }
}

This is the code for LatestBlogPosts entity:

public class LatestBlogPosts
{
    public const string IdConst = "LatestBlogPosts";

    public string Id { get; set; }

    public BlogPost Ayende { get; set; }
    public BlogPost HibernatingRhinos { get; set; }
}

public class BlogPost
{
    public string Title { get; set; }
    public string Description { get; set; }
    public string Link { get; set; }
    public string PubDate { get; set; }
}

Now, let's proceed to the interesting stuff: the implementation of the GetLatestBlogPostAsync method. Let's recap what it needs to do: download an RSS feed and return the first blog item from it.

My first attempt was the following code:

private readonly HttpClient client = new HttpClient();

public Task<BlogPost> GetLatestBlogPostAsync(string url)
{
    return client.GetAsync(url)
        .ContinueWith(task =>
        {
            if (task.IsFaulted || task.Result.StatusCode != HttpStatusCode.OK)
            {
                return null;
            }

            return task.Result.Content.ReadAsStreamAsync()
                .ContinueWith(task1 =>
                {
                    var xDocument = XDocument.Load(task1.Result);
                    var lastItem = xDocument.Descendants("item").First();
                    var post = new BlogPost
                    {
                        Title = lastItem.Descendants("title").First().Value,
                        Description = lastItem.Descendants("description").First().Value,
                        Link = lastItem.Descendants("link").First().Value,
                        PubDate = lastItem.Descendants("pubDate").First().Value,
                    };
                    return post;
                });
        });
}

Ah, what complicated code! And even worse, it does not even compile! Why? I don't know. But hey, I'm not even interested in knowing. This code looks ugly to me, and I know that I can do better. I'm writing the code in Visual Studio 2012 RC, so I can make use of the C# 5.0 async and await keywords. Or at least I remember reading so a while ago. So, it is time to try it.

NotAwaitable

Ah? GetAsync is not an awaitable method. Why? I know that I can use C# 5.0 to generate assemblies that target .NET 4.0, so what is the problem?

I googled a little bit and found out that I need to install the Microsoft.CompilerServices.AsyncTargetingPack NuGet package. After I installed it I was able to make use of the await keyword. (Hah, finally! I wish I had known this before!)

Now let's see how GetLatestBlogPostAsync looks with this:

AsyncMethod
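
The screenshot of the rewritten method isn't reproduced here; an async/await version of the same logic would look roughly like the following reconstruction (my sketch, not the original screenshot):

public async Task<BlogPost> GetLatestBlogPostAsync(string url)
{
    // 'client' is the HttpClient field shown in the earlier snippet
    HttpResponseMessage response;
    try
    {
        response = await client.GetAsync(url);
    }
    catch (HttpRequestException)
    {
        // the original code returned null when the request task faulted
        return null;
    }

    if (response.StatusCode != HttpStatusCode.OK)
        return null;

    var stream = await response.Content.ReadAsStreamAsync();

    var xDocument = XDocument.Load(stream);
    var lastItem = xDocument.Descendants("item").First();

    return new BlogPost
    {
        Title = lastItem.Descendants("title").First().Value,
        Description = lastItem.Descendants("description").First().Value,
        Link = lastItem.Descendants("link").First().Value,
        PubDate = lastItem.Descendants("pubDate").First().Value,
    };
}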

Well, the code now is much more readable and easy to understand. And hey, it works!

I like this!

Implementing a VirtualizingWrapPanel

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

If you’ve used Silverlight or WPF for any length of time, you will know all about Panels – StackPanel, DockPanel, Grid and the like - which handle the positioning of UI elements on screen. You’ll also know about VirtualizingStackPanel, which can work wonders for UI performance when displaying substantial lists of data. It does this by deferring the creation of UI elements for off-screen list items until they are scrolled into view. (If you don’t know about Panels, and want to learn, a good place to start is on Dr WPF’s blog)

What if you have a huge stash of data but don’t want to display it in a boring old list? What if you want items laid out like cards on a table? Silverlight and WPF both include a WrapPanel class which can achieve the desired layout. But as its name suggests, it doesn’t virtualize. Use a WrapPanel in your 10,000-item ListBox, and you’ll be twiddling your thumbs whilst it goes off to create elements you might never see.

This was a problem we faced when we were implementing the new data virtualization features in RavenDb Studio 1.2, which lets you scroll through your entire collection of documents. We wanted the card layout, but WrapPanel wasn’t up to the job of working with huge numbers of items.

So I set about implementing a VirtualizingWrapPanel, which I’ll share with you today. You can see it in action in our sample Netflix browser (switch the Display Style to Card). The code is all available on GitHub.

First I’ll give you a quick refresher on how you make ListBox use a custom Panel, then I’ll run through how one goes about implementing a virtualizing panel in Silverlight.

A Quick Refresher on Using Panels

Here’s a snippet of XAML showing a ListBox using a VirtualizingWrapPanel:

<ListBox x:Name="DocumentsList"
                        ItemsSource="{Binding Items}" DisplayMember="Item.Name">
      <ListBox.Template>
          <ControlTemplate>
              <Border CornerRadius="2" 
                    BorderBrush="{TemplateBinding BorderBrush}"
                    BorderThickness="{TemplateBinding BorderThickness}">
                  <!-- We set the TabNavigation to Cycle on this ScrollViewer to work around a bug which causes the ListBox to lose focus when navigating down (with Down Arrow or Page Down) from the last visible item
              (or even when navigating Up/Down on an item that is only partially visible at the bottom of the screen) -->
                  <ScrollViewer x:Name="ScrollViewer" Padding="{TemplateBinding Padding}" Background="{TemplateBinding Background}" BorderBrush="Transparent" BorderThickness="0" 
                                  TabNavigation="Cycle" IsTabStop="False">
                      <ItemsPresenter />
                  </ScrollViewer>
              </Border>
          </ControlTemplate>
      </ListBox.Template>
    <ListBox.ItemsPanel>
        <ItemsPanelTemplate>
            <VirtualCollection1:VirtualizingWrapPanel ItemWidth="200"
                                            ItemHeight="230"/>
        </ItemsPanelTemplate>
    </ListBox.ItemsPanel>
</ListBox>

The key part is right down at the bottom of the snippet, where we set the ItemsPanel property of the ListBox. There we specify an ItemsPanelTemplate which instantiates the VirtualizingWrapPanel. Notice that you need to tell the panel how big each item is going to be up front, so that it knows how much room to allow for items which are off screen.

There’s one other important point. Notice that I’ve overridden the Template property for the ListBox. This is so that I can tweak the ScrollViewer which is going to contain the VirtualizingWrapPanel. The tweak is a small one: I set its TabNavigation property to Cycle. Without this, keyboard navigation doesn’t work properly within the VirtualizingWrapPanel: under certain circumstances, navigating with the Up/Down or Page Up/Page Down keys would move the keyboard focus out of the ListBox, rather than to the next item in the ListBox.

Birds-Eye View of a Virtualizing Panel

So how do you implement a Virtualizing Panel in Silverlight?

There’s a post on my own blog which explains how to create Panels of the standard variety, and I suggest you go there first for a bit of background. Suffice to say here that normal Panels have one job in life: they must layout the elements given to them by their parent control.

Virtualizing Panels have two additional responsibilities. First, they must keep track of their own scroll state. And second, they must manage the creation of UI elements - item containers - for each data item as and when they are needed. They don't actually create the elements – that is done by an ItemContainerGenerator. They just decide when each item container should be generated.

Implementing a virtualizing panel that can handle UI items of any size you throw at it is a very tricky job. That's because, in order to accurately position any given element, you need to know the sizes of all the elements that come before it in the list. To know their sizes, you have to measure them, and to be measured, they need to exist. But the very reason for using a virtualizing panel is to avoid creating items unnecessarily. That is why virtualizing panels tend to take one of two easy routes out of the conundrum.

VirtualizingStackPanel’s favoured approach is to use index based scrolling, where it only ever shows whole items, and the scroll bar reflects the current items’ indexes in the list and has nothing to do with their sizes. This has the downside of jerky scrolling if you have large items, since each movement of the scrollbar will move previous items entirely out of view.

Jerky scrolling is avoided by the approach I’ve taken, which is to require that each item displayed by the panel is the same size. This enables us to easily calculate the position of any given item based on its index in the list.
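
With fixed item sizes the arithmetic is straightforward; here is a sketch (my own illustrative helper, not code from the sample) of how an item's layout rectangle falls out of its index:

// Illustrative helper: map an item index to its layout slot, relying on every
// item having the same ItemWidth x ItemHeight.
private Rect GetItemRect(int itemIndex, double viewportWidth)
{
    var itemsPerLine = Math.Max(1, (int)(viewportWidth / ItemWidth));

    var line = itemIndex / itemsPerLine;    // which row the item sits on
    var column = itemIndex % itemsPerLine;  // its position within that row

    return new Rect(column * ItemWidth, line * ItemHeight, ItemWidth, ItemHeight);
}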

Scrolling Responsibilities

How does a virtualizing panel discharge its scrolling responsibilities?

The panel needs to implement IScrollInfo so its containing ScrollViewer knows that it is going to manage its own scrolling. IScrollInfo’s job splits into two parts: one is to report on the current scroll state through properties like ExtentHeight and ViewportWidth. The Extent represents the logical size of the panel – what it would be if no scrolling was taking place and everything was visible at once. The Viewport is the rectangle of the panel that is actually visible at this moment.

IScrollInfo’s other job is to enable that scroll state to be manipulated, through methods like LineUp and MouseWheelDown. In many implementations, these manipulation methods all end up by calling one of two other methods on IScrollInfo, SetVerticalOffset and SetHorizontalOffset.
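
For example, LineDown and MouseWheelDown typically just nudge the offset and delegate (a sketch assuming a fixed scroll increment; the real methods are in the GitHub sample):

// Sketch: a fixed "line" increment, as typical IScrollInfo implementations use.
private const double ScrollLineAmount = 16.0;

public void LineDown()
{
    SetVerticalOffset(VerticalOffset + ScrollLineAmount);
}

public void MouseWheelDown()
{
    // the mouse wheel usually scrolls a few lines at a time
    SetVerticalOffset(VerticalOffset + ScrollLineAmount * 3);
}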

Both SetVerticalOffset and SetHorizontalOffset do much the same thing, so I’ll just show the SetVerticalOffset method:

public void SetVerticalOffset(double offset)
{
  if (_isInMeasure)
  {
      return;
  }

  offset = Clamp(offset, 0, ExtentHeight - ViewportHeight);
  _offset = new Point(_offset.X, offset);

  InvalidateScrollInfo();
  InvalidateMeasure();
}

After checking that the given offset is within the scrollable range, the method simply updates a field to show where the panel is scrolled to. Then it notifies its parent ScrollViewer that its state has changed, and, by calling InvalidateMeasure, tells the framework that its layout (i.e. the arrangement of its children) needs updating. The method does one other check: if it discovers that the panel is already in the middle of updating its layout (i.e. _isInMeasure is true) it doesn’t bother changing the scroll offset. This circumvents a problem I’ve encountered where Silverlight tries to scroll to elements newly created during the layout process.

Handling Layout

Layout happens in two parts. First we measure, calculating how much space the panel will occupy taking into account its children; then we arrange, allocating each child its place within the parent. Measuring happens in MeasureOverride, and arranging in ArrangeOverride. Most of the work in the VirtualizingWrapPanel actually happens in MeasureOverride because the panel needs to generate item containers before it can measure them. You might wonder why we need to measure the item containers when we've already decided that they have a fixed size? That is so that the item containers can handle their own layout, which happens using the same Measure and Arrange process.

Before I show you the code for MeasureOverride, here’s an outline of what it does:

  1. GetExtentInfo is called to calculate the full height of the list – i.e. how tall it would be if we displayed it all instead of scrolling.
  2. GetLayoutInfo then works out, based on the scroll offset, which item indexes fall inside the visible range. We always make sure to realize one item before the first visible item, and one item after the last visible item. This enables keyboard scrolling to work properly, because Silverlight can then scroll that item into view, thus triggering the panel to realize the rest of the row.
  3. Any of the current item containers that now fall outside of the visible range are recycled – handed back to the ItemContainerGenerator so that it can reuse them for other items.
  4. MeasureOverride then loops through the visible item indexes and:
    1. Asks the ItemContainerGenerator to create the UI container for the item. If the visibility of the item hasn’t changed, ItemContainerGenerator will give us back the same item container we had before
    2. Calls SetVirtualItemIndex to tag the container with the index in the list that it is now representing. This is used in step 3 the next time round when we check for items that fall outside of the visible range.
    3. Makes sure the container is in the correct place in the Panel’s list of visual children, and thus part of the visual tree. If an item is not in the correct order, keyboard navigation won’t work correctly
    4. Asks the ItemContainerGenerator to Prepare the container – apply its styles, etc.
    5. Measures the item container – this must come after step 4, because until that point the item container isn’t fully initialized, and might not even know what its content is.
    6. Decide where exactly the item container is going to be displayed: its layout rectangle is recorded in the _childLayouts dictionary. This is used by the ArrangeOverride method to arrange each child
  5. Now we can go through the panel's list of item containers and remove any which were never actually reused. Back in step 3 every existing item container's VirtualItemIndex tag was set to –1; and in step 4.2, containers which are in use have their tag set to the index of the item they are now representing. So item containers which were never reused will still have a VirtualItemIndex of –1.
  6. Update the scroll state with the new extent height and viewport size so that scroll bars can be drawn in the correct proportions.
  7. Decide what size to report as the desired size for the Panel. The Panel wants to take up all the available space (within the confines of its parent), so in most cases, that’s what it reports. But if it is told that the available size is Infinite, then it returns a size of 0, and by convention, Silverlight takes that to mean “use all the available space”.

Here’s the code:

protected override Size MeasureOverride(Size availableSize)
{
  if (_itemsControl == null)
  {
      return availableSize;
  }
  
  _isInMeasure = true;
  _childLayouts.Clear();

  var extentInfo = GetExtentInfo(availableSize, ItemHeight);

  EnsureScrollOffsetIsWithinConstrains(extentInfo);

  var layoutInfo = GetLayoutInfo(availableSize, ItemHeight, extentInfo);

  RecycleItems(layoutInfo);

  // Determine where the first item is in relation to previously realized items
  var generatorStartPosition = _itemsGenerator.GeneratorPositionFromIndex(layoutInfo.FirstRealizedItemIndex);

  var visualIndex = 0;

  var currentX = layoutInfo.FirstRealizedItemLeft;
  var currentY = layoutInfo.FirstRealizedLineTop;

  using (_itemsGenerator.StartAt(generatorStartPosition, GeneratorDirection.Forward, true))
  {
      for (var itemIndex = layoutInfo.FirstRealizedItemIndex; itemIndex <= layoutInfo.LastRealizedItemIndex; itemIndex++, visualIndex++)
      {
          bool newlyRealized;

          var child = (UIElement)_itemsGenerator.GenerateNext(out newlyRealized);
          SetVirtualItemIndex(child, itemIndex);

          if (newlyRealized)
          {
              InsertInternalChild(visualIndex, child);
          }
          else
          {
              // check if item needs to be moved into a new position in the Children collection
              if (visualIndex < Children.Count)
              {
                  if (Children[visualIndex] != child)
                  {
                      var childCurrentIndex = Children.IndexOf(child);

                      if (childCurrentIndex >= 0)
                      {
                          RemoveInternalChildRange(childCurrentIndex, 1);
                      }

                      InsertInternalChild(visualIndex, child);
                  }
              }
              else
              {
                  // we know that the child can't already be in the children collection
                  // because we've been inserting children in correct visualIndex order,
                  // and this child has a visualIndex greater than the Children.Count
                  AddInternalChild(child);
              }
          }

          // only prepare the item once it has been added to the visual tree
          _itemsGenerator.PrepareItemContainer(child);

          child.Measure(new Size(ItemWidth, ItemHeight));

          _childLayouts.Add(child, new Rect(currentX, currentY, ItemWidth, ItemHeight));

          if (currentX + ItemWidth * 2 >= availableSize.Width)
          {
              // wrap to a new line
              currentY += ItemHeight;
              currentX = 0;
          }
          else
          {
              currentX += ItemWidth;
          }
      }
  }

  RemoveRedundantChildren();
  UpdateScrollInfo(availableSize, extentInfo);

  var desiredSize = new Size(double.IsInfinity(availableSize.Width) ? 0 : availableSize.Width,
                             double.IsInfinity(availableSize.Height) ? 0 : availableSize.Height);

  _isInMeasure = false;

  return desiredSize;
}
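
GetExtentInfo and GetLayoutInfo aren't shown here (the full versions are in the GitHub sample), but with fixed-size items the extent calculation boils down to counting lines; a rough sketch, with an assumed ExtentInfo shape:

// Sketch only: the real ExtentInfo and GetExtentInfo live in the GitHub sample
// and may carry more fields than shown here.
internal class ExtentInfo
{
    public int ItemsPerLine;
    public int TotalLines;
    public double ExtentHeight;
    public double MaxVerticalOffset;
}

private ExtentInfo GetExtentInfo(Size viewPortSize, double itemHeight)
{
    if (_itemsControl == null)
        return new ExtentInfo();

    // with fixed-size items, the full (unscrolled) height is just lines * item height
    var itemsPerLine = Math.Max(1, (int)Math.Floor(viewPortSize.Width / ItemWidth));
    var totalLines = (int)Math.Ceiling((double)_itemsControl.Items.Count / itemsPerLine);
    var extentHeight = Math.Max(totalLines * itemHeight, viewPortSize.Height);

    return new ExtentInfo
    {
        ItemsPerLine = itemsPerLine,
        TotalLines = totalLines,
        ExtentHeight = extentHeight,
        MaxVerticalOffset = extentHeight - viewPortSize.Height,
    };
}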

It’s a Wrap

That concludes our look at the VirtualizingWrapPanel. Go and check out the whole sample on GitHub, and let us know what you make of it.

Data Virtualization in Silverlight: Digging into VirtualCollection

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

The other day I showed you how we enabled the display of massive scrollable lists of items in Silverlight by implementing data virtualization for DataGrids and ListBoxes. If you’ve looked at that sample, you might be curious as to how it all works.

That is the subject for today's post.

What is Virtualization?

If you’re familiar with Silverlight or WPF, you’ll know all about virtualization when it comes to ListBoxes and DataGrids. Virtualization is a trick the framework uses to avoid having to render UI for items in a list that are not visible on screen. It creates UI for items that are visible, and as the user scrolls, it destroys the UI elements for data that has disappeared off screen, and creates new elements for items that have just come into view. VirtualizingStackPanel is one of the classes in WPF and Silverlight that is responsible for handling this on behalf of controls like the ListBox.

VirtualCollection, the class that lies at the heart of our solution, takes this one step further, and applies the same idea to the items in the list themselves. Rather than loading all the items into memory up front, it pulls in just the ones that are needed as the user scrolls (which might involve making a call to a server), and unloads any items which are not likely to be needed again any time soon. Sounds simple enough in theory, but getting this to work in Silverlight involves a bit of trickery.

Appetite reduction therapy, Silverlight style

The first problem we come up against is Silverlight's I-want-it-and-I-want-it-all-NOW attitude to data. If you present a Silverlight ListBox or DataGrid with a simple IList to be its data source, Silverlight will ask the list for its Count, and then proceed to request every single item in the list, irrespective of how much it can actually display at that moment. This isn't a problem in most use cases, but when we want our VirtualCollection to display lists with potentially billions of data items, that approach just isn't tenable. So we have to find a way to get Silverlight to slim down its appetite for data.

The trick here, it turns out, is to implement ICollectionView as well as IList. Flaunting this extra interface seems to be enough to persuade Silverlight to back down, and just ask for the items it can actually display, as it needs them. There is some extra work involved, as ICollectionViews are supposed to be responsible for keeping track of the current item selected in a list, but we can handle that.

So with ICollectionView implemented, Silverlight now asks the VirtualCollection for items it wants to display one by one. But that leads to the next problem. When a ListBox, say, first comes knocking on  the door,  VirtualCollection doesn’t have any items in stock. They’re all on the server, and have to be fetched into memory. But VirtualCollection can’t keep Silverlight waiting whilst it goes and does that, or Silverlight will sulk and hang the browser. So VirtualCollection fobs Silverlight off by giving it back a VirtualItem which acts as a place-holder for the real item. Silverlight can then happily create a row in the ListBox, albeit a blank row, because it has no actual data yet. When the real item arrives, fresh from the server, VirtualCollection hands it to the VirtualItem, and through the magic of data binding, the row lights up with the appropriate data.

Let’s take a look at some code, to see how VirtualCollection implements this process.

VirtualItems

We’ll start where Silverlight starts, when it wants to display a particular item: at VirtualCollection’s indexer:

public VirtualItem<T> this[int index]
{
  get
  {
      RealizeItemRequested(index);
      return _virtualItems[index] ?? (_virtualItems[index] = new VirtualItem<T>(this, index));
  }
  set { throw new NotImplementedException(); }
}

Two things are happening here: first up, we’re kicking off a request to realize the item – fetch it from the server, in other words. Secondly, we’re creating a VirtualItem to stand in for the real item – or if we’ve already created a VirtualItem representing that index, we simply return that.

It’s worth saying a word about how we store all these VirtualItems, because they are not stored in a standard List. The reason for that is that there could be billions of them, and we wouldn’t want to allocate an array that big (which is what a List would do). That would be especially wasteful given that the vast majority of the array would never be used, since the user is unlikely to look at every single item.

So VirtualItems are stored in a SparseList, which allows items to be accessed by index as conveniently as if they  were in one big array. Under the covers it actually stores items in lots of smaller arrays which can be more easily garbage-collected when they’re no longer needed.
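
A minimal sketch of that idea (the SparseList in the sample is more sophisticated, so treat this as an illustration of the technique rather than its actual code):

using System.Collections.Generic;

// A toy SparseList<T>: items are addressed by index, but storage is allocated
// in small blocks only for the regions that are actually touched.
public class SparseList<T> where T : class
{
    private const int BlockSize = 100;
    private readonly Dictionary<int, T[]> _blocks = new Dictionary<int, T[]>();

    public T this[int index]
    {
        get
        {
            T[] block;
            // untouched regions have no backing array at all, so they cost nothing
            return _blocks.TryGetValue(index / BlockSize, out block)
                ? block[index % BlockSize]
                : null;
        }
        set
        {
            T[] block;
            if (!_blocks.TryGetValue(index / BlockSize, out block))
                _blocks[index / BlockSize] = block = new T[BlockSize];
            block[index % BlockSize] = value;
        }
    }
}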

To Fetch a Page of Data

Let’s chase the process along a little further.

public void RealizeItemRequested(int index)
{
  var page = index / _pageSize;
  BeginGetPage(page);
}

private void BeginGetPage(int page)
{
  if (IsPageAlreadyRequested(page))
  {
      return;
  }

  _mostRecentlyRequestedPages.Add(page);
  _requestedPages.Add(page);

  _pendingPageRequests.Push(new PageRequest(page, _state));

  ProcessPageRequests();
}

Items are never fetched from the server as individuals, but always in pages, so that we can make the most of each round-trip across the wire. The first thing that RealizeItemRequested does, therefore, is work out which page the requested index is part of. Then it puts in a request to fetch that page.

BeginGetPage doesn't do any fetching directly. After ensuring it isn't wasting time fetching a page it already has, it pushes a fetch request for the page onto a stack, _pendingPageRequests. It also adds the page to a list it maintains of the most recently requested pages. This list is used to manage the cache of pages which are kept in memory. When the list gets too long, the oldest pages are evicted from memory.
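
That eviction can be as simple as trimming the head of the most-recently-requested list; here is a sketch (my own, assuming _mostRecentlyRequestedPages behaves like an ordered list with the least recently requested page at the front):

// Sketch only: the actual cache-management code is in the GitHub sample.
private void EvictCachedPagesIfNeeded(int maxPages)
{
    while (_mostRecentlyRequestedPages.Count > maxPages)
    {
        var oldestPage = _mostRecentlyRequestedPages[0];
        _mostRecentlyRequestedPages.RemoveAt(0);

        // forget about the page entirely; if it is ever needed again it will be re-fetched
        _requestedPages.Remove(oldestPage);
        _fetchedPages.Remove(oldestPage);
    }
}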

Finally BeginGetPage goes off to process the page requests.

private void ProcessPageRequests()
{
  while (_inProcessPageRequests < MaxConcurrentPageRequests && _pendingPageRequests.Count > 0)
  {
      var request = _pendingPageRequests.Pop();

      // if we encounter a request posted for an earlier collection state,
      // we can ignore it, and all that came before it
      if (_state != request.StateWhenRequested)
      {
          _pendingPageRequests.Clear();
          break;
      }

      // check that the page is still requested (the user might have scrolled, causing the
      // page to be ejected from the cache)
      if (!_requestedPages.Contains(request.Page))
      {
          break;
      }

      _inProcessPageRequests++;

      _source.GetPageAsync(request.Page * _pageSize, _pageSize, _sortDescriptions).ContinueWith(
          t =>
          {
              if (!t.IsFaulted)
              {
                  UpdatePage(request.Page, t.Result, request.StateWhenRequested);
              }
              else
              {
                  MarkPageAsError(request.Page, request.StateWhenRequested);
              }

              // fire off any further requests
              _inProcessPageRequests--;
              ProcessPageRequests();
          },
          _synchronizationContextScheduler);
  }
}

There are a number of interesting design decisions revealed by ProcessPageRequests, so it’s worth looking at in some detail.

First, why do we use a stack for managing the pending page requests, and not, say, a queue? Well, think about what happens when a user scrolls a long way through a list, and then stops. As they scroll, page requests will be generated, and will take some time to process, so the queue would grow. By the time the user stops scrolling, the queue might be quite long, and it would take a little time for the most recent requests, the ones for the pages the user is now staring at, to be dealt with. By putting the requests in a stack, which is a last-in-first-out data structure, we ensure that the most recent requests, those most likely to be relevant, are dealt with first.

We might also find that there are some requests which are no longer relevant. If the user scrolled a very long way, causing lots of pages to be added to the _mostRecentlyRequestedPages list, there may well be some pages which have to be kicked out of the cache. Any fetch requests for those pages can be safely ignored.

Requests can become irrelevant for another reason too: the collection state may have moved on since the request was made. The collection state is a concept that helps to make sure the VirtualCollection does the right thing at the right time in spite of all the asynchronous activity going on. The state is simply a number which is incremented every time VirtualCollection is notified by the IVirtualCollectionSource that the collection has changed (this is done in UpdateData and Reset). Whenever an asynchronous action is started, say a request to fetch data from the server, VirtualCollection tags the request with the state at that instant. When the response of the asynchronous action comes back, VirtualCollection compares its current state with the state when the action was begun. If the two are different, VirtualCollection knows that the world has moved on, so that response is no longer relevant.

Notice, finally, in this method that when we fire off the asynchronous request to get a page of data – this is where the IVirtualCollectionSource comes into play – we make sure the response is handled in the UI thread;  that is the purpose of passing _synchronizationContextScheduler to ContinueWith. All responses to asynchronous requests are handled in this way to make sure we don’t have any race conditions when updating VirtualCollection’s internal state.

Acting on the Data

What happens when the data is received from the server?

private void UpdatePage(int page, IList<T> results, uint stateWhenRequested)
{
  if (stateWhenRequested != _state)
  {
      // this request may contain out-of-date data, so ignore it
      return;
  }

  bool stillRelevant = _requestedPages.Remove(page);
  if (!stillRelevant)
  {
      return;
  }

  _fetchedPages.Add(page);

  var startIndex = page * _pageSize;

  for (int i = 0; i < results.Count; i++)
  {
      var index = startIndex + i;
      var virtualItem = _virtualItems[index] ?? (_virtualItems[index] = new VirtualItem<T>(this, index));
      if (virtualItem.Item == null || results[i] == null || !_equalityComparer.Equals(virtualItem.Item, results[i]))
      {
          virtualItem.SupplyValue(results[i]);
      }
  }

  if (results.Count > 0)
  {
      OnItemsRealized(new ItemsRealizedEventArgs(startIndex, results.Count));
  }
}

Data coming back from the server is passed to UpdatePage. Before doing anything with the data, the method checks that it is still relevant – that the collection state hasn’t changed, and that the page to which the data belongs hasn’t been evicted from the cache. Then it takes each item and passes it to its corresponding VirtualItem.

Two snippets from VirtualItem will suffice to show what’s going on there:

public class VirtualItem<T> : INotifyPropertyChanged where T : class
{
    // Snip ...
    
   public T Item
   {
       get
       {
           if (!IsRealized && !DataFetchError)
           {
               _parent.RealizeItemRequested(Index);
           }
           return _item;
       }
       private set
       {
           _item = value;
           OnPropertyChanged(new PropertyChangedEventArgs("Item"));
           OnPropertyChanged(new PropertyChangedEventArgs("IsRealized"));
           IsStale = false;
       }
   }

   public void SupplyValue(T value)
   {
       DataFetchError = false;
       Item = value;
   }
}

VirtualItem makes the real item available through its Item property, and when SupplyValue is called this leads to a PropertyChanged event being raised for the Item property, so the Silverlight databinding infrastructure knows it needs to update the UI for that item.

Handling Updates

There’s one other important aspect of VirtualCollection to consider, and that is how it handles changes to the underlying collection of data.

It is the responsibility of the IVirtualCollectionSource to tell VirtualCollection when these changes happen.

The events are handled here:

private void HandleSourceCollectionChanged(object sender, VirtualCollectionSourceChangedEventArgs e)
{
  var stateWhenUpdateRequested = _state;
  if (e.ChangeType == ChangeType.Refresh)
  {
      Task.Factory.StartNew(() => UpdateData(stateWhenUpdateRequested), CancellationToken.None,
                            TaskCreationOptions.None, _synchronizationContextScheduler);
  }
  else if (e.ChangeType == ChangeType.Reset)
  {
      Task.Factory.StartNew(() => Reset(stateWhenUpdateRequested), CancellationToken.None,
                            TaskCreationOptions.None, _synchronizationContextScheduler);
  }
}

There are two types of changes we consider, a Refresh, and a complete Reset. Refresh happens when much of the data may well have remained unchanged, and it makes sense to continue to display it whilst we check with the server. A Reset happens when the entire collection has changed and displaying stale items would be inappropriate.

Notice that we are dispatching the events to be handled on the UI thread (there’s that _synchronizationContextScheduler again), and as we do so, we tag the request with the current collection state so that when we get round to handling the requests we can ignore any that are no longer relevant.

The Reset method contains nothing of great interest, but UpdateData explains something quite important:

protected void UpdateData(uint stateWhenUpdateRequested)
{
  if (_state != stateWhenUpdateRequested)
  {
      return;
  }

  _state++;

  MarkExistingItemsAsStale();

  _fetchedPages.Clear();
  _requestedPages.Clear();

  UpdateCount();

  var queryItemVisibilityArgs = new QueryItemVisibilityEventArgs();
  OnQueryItemVisibility(queryItemVisibilityArgs);

  if (queryItemVisibilityArgs.FirstVisibleIndex.HasValue)
  {
      var firstVisiblePage = queryItemVisibilityArgs.FirstVisibleIndex.Value / _pageSize;
      var lastVisiblePage = queryItemVisibilityArgs.LastVisibleIndex.Value / _pageSize;

      int numberOfVisiblePages = lastVisiblePage - firstVisiblePage + 1;
      EnsurePageCacheSize(numberOfVisiblePages);

      for (int i = firstVisiblePage; i <= lastVisiblePage; i++)
      {
          BeginGetPage(i);
      }
  }
  else
  {
      // in this case we have no way of knowing which items are currently visible,
      // so we signal a collection reset, and wait to see which pages are requested by the UI
      OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
  }
}

Remember that the purpose of UpdateData is to check that each item in the collection remains up to date. But it has to be frugal: it can't go off to the server to ask about every possible item. Once again, it needs to limit itself to just those items that are visible.

One way of doing this, the way VirtualCollection worked when I first implemented it, is to raise a CollectionChanged event with a NotifyCollectionChangedAction of Reset. This signals to any listening ListBoxes or DataGrids that the whole collection has changed and they should rebuild their display of the data. They then respond by calling VirtualCollection's indexer for each index in the visible range, and thus VirtualCollection finds out which pages it should refresh. In many circumstances this is the only way of refreshing the collection, because we have no other way of knowing exactly which items are visible.

But it is a rather sledgehammer approach, rebuilding all the rows in the UI, when we could just update the data within those rows, if only we knew which items are currently visible. So before resorting to a collection reset, VirtualCollection will try another approach. It tries asking nicely anybody who might be listening which items are visible.

It does this by raising the QueryItemVisibility event. To make sure this request doesn’t go unheeded, you need to attach a behaviour to your DataGrid or ListBox which will respond to the event. As part of the code sample, I have provided the ProvideVisibleItemRangeFromDataGridBehavior and the ProvideVisibleItemRangeFromItemsControlBehavior – though note that currently, this last behavior only does anything useful if the ItemsControl (usually in the form of a ListBox) is using a VirtualizingWrapPanel to display its items. See MainPage.xaml in the sample for an example of how to attach these behaviors.

The Final Word

And that concludes our dissection of VirtualCollection. Remember that you can try out a live demo here, and get all the code on github so that you can put it to use in your own projects.

Do let us know how you get on.

Next time: exploring the VirtualizingWrapPanel.

Data Virtualization, Lazy Loading, Stealth Paging–whatever you want to call it: Here’s how to do it in Silverlight

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

As I have previously mentioned, the management UI for RavenDb (fondly known as the Studio) has a hot new feature. A revolutionary interface for viewing huge sets of documents which uses (gasp!) scrollbars instead of paging controls. To make this feature kind to the server (we're averse to killing Ravens of any kind), the UI requests documents just in time, as the user scrolls each part of the list into view.

In the next few blog posts, I want to share with you how we made that work in Silverlight. I’ve published a sample on GitHub, including code under a license that enables you to use it in your own projects.

Today, to whet your appetites, I’ll show you how easy it is to create user interfaces that display huge collections of items. In future posts, I’ll dig into the magic which makes it all work.

Flicking through the Netflix Catalogue

The sample is a simple browser for the Netflix movie catalogue, which lets you scroll through all the available titles, or just those containing the search terms you enter. I chose this example because Netflix have an OData API which makes the catalogue very easy to query.

As well as the details view, shown below, the sample also has a card view which demonstrates our VirtualizingWrapPanel, also included in the sample.

Netflix Browser Sample

Click the image to see the sample in action. (There’s a known issue with images not showing up, because of how I’m hosting the files in Amazon S3 – run the sample locally to see it in its full glory)

Holding it all together: VirtualCollection

The key player in this setup is the VirtualCollection class, which you’ll find in the sample code. VirtualCollection acts as the data source which you bind to your ListBoxes or DataGrids, and it has the responsibility of coordinating data retrieval from the server.  The actual fetching of data is handled by a class implementing IVirtualCollectionSource: each VirtualCollection is associated with one IVirtualCollectionSource.

public class MainViewModel : ViewModel
{
   private NetflixTitlesSource _source;
   public VirtualCollection<Title> Items { get; private set; }

   public MainViewModel()
   {
       _source = new NetflixTitlesSource();
       Items = new VirtualCollection<Title>(_source, pageSize: 20, cachedPages: 5);
   }

   protected override void OnViewLoaded()
   {
       Items.Refresh();
   }
}

Using the VirtualCollection is very straightforward. You create an instance, telling it what kind of items it will be managing (Title objects in this case), and supplying it with the IVirtualCollectionSource (implemented here by NetflixTitlesSource – more on this in a moment).

You also tell it what page size to use, in other words, how many items it should fetch in each call to the server. You can determine an appropriate page size by figuring out how many items are likely to fill one screen of your application. VirtualCollection also wants to know how many such pages it should cache in memory: if your items are small, and change infrequently, caching more pages might well be a good idea.

Supplying the goods: IVirtualCollectionSource

When it comes to implementing IVirtualCollectionSource, you have to do a little more work, but nothing too arduous. In fact, there’s a base class which does much of the housekeeping for you.

An IVirtualCollectionSource has two main responsibilities: it needs to inform the VirtualCollection of the total number of items, and it must supply pages of items when requested. It should also let the VirtualCollection know if anything changes, by means of the CollectionChanged event.

Here’s what the key methods of the NetflixTitlesSource look like:

public class NetflixTitlesSource : VirtualCollectionSource<Title>
{
    private string _search;

   public string Search
   {
       get { return _search; }
       set
       {
           _search = value;
           Refresh(RefreshMode.ClearStaleData);
       }
   }
   
   protected override Task<int> GetCount()
   {
       return GetQueryResults(0, 1, null)
           .ContinueWith(t => (int)t.Result.TotalCount, TaskContinuationOptions.ExecuteSynchronously);
   }

   protected override Task<IList<Title>> GetPageAsyncOverride(int start, int pageSize, IList<SortDescription> sortDescriptions)
   {
       return GetQueryResults(start, pageSize, sortDescriptions)
           .ContinueWith(t => (IList<Title>)((IEnumerable<Title>)t.Result).ToList(), TaskContinuationOptions.ExecuteSynchronously);
   }

    private Task<QueryOperationResponse<Title>> GetQueryResults(int start, int pageSize, IList<SortDescription> sortDescriptions)
   {
       var context = new NetflixCatalog(new Uri("http://odata.netflix.com/Catalog"));

       var orderByString = CreateOrderByString(sortDescriptions);
       var query = context.Titles
           .AddQueryOption("$skip", start)
           .AddQueryOption("$top", pageSize)
           .IncludeTotalCount();

       if (!string.IsNullOrEmpty(Search))
       {
           query = query.AddQueryOption("$filter", "(substringof('" + Search + "',Name) eq true) and (BoxArt/SmallUrl ne null)");
       }
       else
       {
           query = query.AddQueryOption("$filter", "(BoxArt/SmallUrl ne null)");
       }

       if (orderByString.Length > 0)
       {
           query = query.AddQueryOption("$orderby", orderByString);
       }

       return Task.Factory.FromAsync<IEnumerable<Title>>(query.BeginExecute, query.EndExecute, null)
           .ContinueWith(t => (QueryOperationResponse<Title>)t.Result, TaskContinuationOptions.ExecuteSynchronously);
   }
   
   ...
}

As you can see, the majority of the work lies in preparing the query that we send off to the Netflix OData API, and executing it asynchronously. By comparison, the code needed to support IVirtualCollectionSource (just two methods, GetCount and GetPageAsyncOverride) is very small.
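
CreateOrderByString isn't shown in the snippet above; translating the grid's SortDescriptions into an OData $orderby clause is simple string building, roughly along these lines (a sketch, not the exact code from the sample):

// Sketch (requires System.Linq and System.ComponentModel); the real
// CreateOrderByString is in the GitHub sample.
private static string CreateOrderByString(IList<SortDescription> sortDescriptions)
{
    if (sortDescriptions == null || sortDescriptions.Count == 0)
        return string.Empty;

    // e.g. a descending sort on "AverageRating" becomes "AverageRating desc"
    return string.Join(",",
        sortDescriptions.Select(s =>
            s.PropertyName + (s.Direction == ListSortDirection.Descending ? " desc" : "")));
}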

A VirtualCollectionSource can communicate with its parent VirtualCollection by means of the CollectionChanged event if something happens that means the collection needs to be updated. In this case, changing the Search property is going to change the entire collection, so we call the Refresh method on the base class, which raises that event.

There are two kinds of Refresh available. The Search property is using the ClearStaleData mode, which means that the whole collection will be cleared whilst new results are being loaded. This is obviously the right choice here, because continuing to show results for “Harry Potter” after the user has just changed the search terms to “Indiana Jones” would be confusing.

The other kind of Refresh is called PermitStaleDataWhilstRefreshing. This mode is good if your server has just notified you that new items have been added. Existing items are still valid, so there's no need to clear down the whole collection whilst you find out about the new items.
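
For instance, a collection source reacting to a hypothetical "new items on the server" notification might refresh like this (a sketch; the handler name is mine, while Refresh and RefreshMode come from the base class as shown earlier):

// Hypothetical handler: the server told us new titles were added. Existing items
// are still valid, so keep showing them while the collection refreshes.
private void HandleNewItemsOnServer()
{
    Refresh(RefreshMode.PermitStaleDataWhilstRefreshing);
}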

Making it Presentable

Finally, here's some XAML to show how we might display the contents of the VirtualCollection in a DataGrid:

<sdk:DataGrid ItemsSource="{Binding Items}" Grid.Row="1" Margin="10,0" IsReadOnly="True" AutoGenerateColumns="False" RowHeight="50">
 <i:Interaction.Behaviors>
     <VirtualCollection1:ProvideVisibleItemRangeFromDataGridBehavior/>
 </i:Interaction.Behaviors>
 <sdk:DataGrid.Columns>
     <sdk:DataGridTemplateColumn Header="Image">
         <sdk:DataGridTemplateColumn.CellTemplate>
             <DataTemplate>
                 <Image Stretch="None" 
                         Source="{Binding Item.BoxArt.SmallUrl, Converter={StaticResource Converter_StringToImage}}" 
                        Margin="5" />
             </DataTemplate>
         </sdk:DataGridTemplateColumn.CellTemplate>
     </sdk:DataGridTemplateColumn>
     <sdk:DataGridTextColumn Binding="{Binding Item.Name}" Header="Name" CanUserSort="True" SortMemberPath="Name"/>
     <sdk:DataGridTextColumn Binding="{Binding Item.AverageRating}" Header="Average Rating" CanUserSort="True" SortMemberPath="AverageRating"/>
     <sdk:DataGridTextColumn Binding="{Binding Item.ReleaseYear}" Header="Release Year" CanUserSort="True" SortMemberPath="ReleaseYear"/>
 </sdk:DataGrid.Columns>
</sdk:DataGrid>

The main thing to notice here is that all the binding paths start with Item. That’s because each data item in the VirtualCollection is wrapped in a VirtualItem, and made available through VirtualItem’s Item property. VirtualCollection does this so that it can put off fetching any actual items until the last possible moment – more on this in another blog post.

Notice also that we can enable sorting on some of the columns by setting a SortMemberPath. When you click a column header, the DataGrid will pass that value to the VirtualCollection, and VirtualCollection will pass it on to the IVirtualCollectionSource which can then use it to instruct the server in how to sort the results it returns.

Try it out and Get Back to Us

Now you’ve seen how easy it is, you have no excuse. Go and purge those paging controls from your apps! Long live scrollbars!

All the code you need is up on GitHub - look in the VirtualCollections folder. And don’t forget to check back here shortly, as we take a look behind the scenes to find out how VirtualCollection works.

We’d love to hear how you get on with this in your own projects.

New in RavenDb Studio 1.2: Low-Bandwidth mode

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

We saw earlier that in RavenDb Studio 1.2 you can now scroll through the entire list of documents in a Raven database, and the Studio will page in documents just in time behind the scenes. But those documents still need to be fetched from the server, and if your documents are very large (into the 100s of kilobytes and more), or you are working over a very slow connection, fetching the documents will take some time, and make for a frustrating experience.

Which is why we’ve introduced a low-bandwidth, or “Id only” mode for viewing document lists. You switch to it by clicking the Document View Style button on the top right of any Document list.

image

In this mode, the Studio will only fetch the metadata for each document, saving a lot of time when contacting the server and allowing the lists to be displayed much more quickly. To see the whole document, you simply double-click it, or click the Pencil icon, as in Details or Card mode.

New in RavenDb Studio 1.2: Quick Links

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

Here’s another small, but perfectly formed feature in RavenDb Studio 1.2.

The header bar now sports a “quick links” area:

image

Two things to see here. First there’s the new New button which speeds you on your way to create a new document, index or query.

image

And there’s the “Go To Document” bar. Type the Id of a document, hit Enter, and you’ll be taken straight to it. Even better, if you don’t know the exact name of the document, start typing its Id, and you’ll be prompted with a list of possibilities.

image

New in RavenDb Studio 1.2: Paging through result sets

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

A small, but very useful enhancement today, in our on-going tour through the new UI features for RavenDb 1.2.

image

Notice the paging controls highlighted in the screenshot? These now appear whenever you arrive at a document via a documents list. This might be from the Documents or Collections page, for example, or from the results list of a Query. These controls let you page through all the documents in that particular list.

Say, for example, that you have misspelt an Artist’s name and you need to make a quick correction in the database. You could create a query to find all the documents containing the misspelling, and then double-click the results list to edit the first one. You make the correction, and save the document. What do you do next? Before, you would have had to hit the back button to go back to the query page to find the next document. Now you can just hit the Next button in the Edit page to work through the erroneous documents one-by-one.

New in RavenDb Studio 1.2: Better document editing

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

Continuing in our tour of the new features in RavenDb Studio for 1.2, today we come to the document editor.

Outlining Support

Since Documents are RavenDb’s raison d'etre, having a good document editing and viewing experience is vital. Which is why we’ve put a good bit of work into enhancing the Studio’s document editor.

The first thing you'll notice is that the editor now has outlining support, so that you can expand and collapse collections and nested objects within a document:

image_thumb35

Clicking the Outlining button collapses all the top level nodes, and clicking it again expands them all.

If you have many documents containing large collections, you may find the new Auto-Collapse Collections feature useful. When this mode is selected (find it by clicking the drop-down arrow on the Outlining button) the Document editor will automatically collapse all collections when you first open a document in the editor page.

JSON Syntax Error Highlighting

The next new feature you probably won’t notice, until you start modifying a document. Then you’ll see that the editor now has real-time JSON syntax checking:

image_thumb37

Any errors in the document are highlighted with red squiggles, and an error list at the bottom summarises all the errors detected, with a double-click taking you to the malformed part of the document, as you'd expect.

Document Hyperlinks

Here’s a handy new feature for checking relationships between documents. The document editor now turns any document Ids it encounters within a document into hyperlinks that you can navigate to with a Shift-Click (or Shift-Ctrl-Click to open in a new page):

image

 

Not only is this good for jumping between related documents. It also lets you see at a glance if your document references are valid, since the document editor only hyperlinks Ids of documents that actually exist in the database.

New in RavenDb Studio 1.2: Enhancements to querying

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

Welcome back to our tour of the new features in RavenDb Studio 1.2. Up for examination today is the Query page.

Remembering Recent Queries

Over on the RavenDb issue tracker, we were asked to “treat queries as first-class citizens”. I don’t know what other rights queries might demand, but one right we decided to give them was the right to be remembered. Enter the Recent Queries dropdown:

image

Every query you execute is remembered, and you can go back to a previous query just by clicking its entry in the list. As with most navigation buttons in the Studio, clicking with the Control key held down will open the query in a new window.

Up to 100 of your most recent queries are remembered for next time, even after you close your browser. Though if you should have been querying with some nefarious purpose, there's always the "Clear History" button to cover your tracks.

If you have any queries that are especially near and dear to your heart, you can Pin them to make sure they always stay at the top of the list and don’t get swept away if the rest of the history is cleared.

 

Other Changes

Along with this, there are a number of smaller enhancements:

  • Sort By now works for dynamic queries when querying over a particular collection.
  • Similarly, you now get Intellisense in the query editor for dynamic queries when you have limited the query to a specific collection. In other words, press Ctrl-Space in the query editor, and you can choose from a list of properties in documents in that collection.
  • Sort By options are now remembered when you navigate back to a previous query.
  • You can now choose the default operator used to combine clauses in your query.
  • A Show Fields option has been added to help you understand what’s going on with your indexes. When pressed, the results view will show the fields in matching index entries, rather than the actual documents. In a similar vein, there’s a new Skip Transform button which shows up when an Index has a Transform stage defined. Pressing Skip Transform will show you the documents matching a query, rather than the results of the transform.
  • For all you JSON-heads out there, the Query page shows you what URL the client is calling to get the results from the server, and if you press the link button next to the URL, you'll see the raw JSON that the server sends back.

New in RavenDb Studio 1.2: Scrollable Document Lists, and a Details View

This is a guest post by Samuel Jack, a freelancer who has worked with us on the new RavenDb Studio features.

Along with the exciting new features in the core RavenDb database, we’ve been hard at work improving RavenDb Studio, the management UI for RavenDb. Over the next few days, I want to share with you some of the major enhancements.

Death to paging!

Almost the very first thing we did when we started work on the Studio for 1.2 was to kill the paging interface for document lists. And hammer nails in its coffin. Please give a big round of applause as we introduce … scrollbars!

image_thumb3

That’s right: scrollbars that let you scroll through your entire collection of documents, all 2,147,483,647 of them. Don’t be alarmed, you db admin types: the Studio is doing the right thing and paging in documents only as you scroll to them (and paging out documents you’ve scrolled past, so they don’t clog up memory).

 

Digging into the Document Details

By popular request, we've also added a details view as an alternative to the existing card view. Click the Document View Style button in the top right corner of any documents list and you can toggle between card view and details view (whichever setting you choose is remembered between sessions).

image_thumb12

Which columns get shown in details view? Well, naturally the Studio allows you to choose for yourself which columns you want to see. Or you can let the Studio choose for you.

 

 

Pick your own Columns

To choose your own columns, right click on one of the column headers and click the Choose Columns menu item. You’ll see a dialog where you can choose the columns you’d like:

image_thumb17

When you click in the Binding column to add a new Column, the Studio will help you out by showing a list of all the properties (and nested properties) in the visible documents:

image_thumb19

Notice that you can bind to document meta-data too!

 

Automatic Column Selection

To save you having to pick columns yourself for each set of documents you view, the Studio has a special IntelliPick (not TM) feature. It will inspect the properties of the documents currently scrolled into view, and do its best to pick a sensible set of columns. And as you scroll, and different types of documents come into view, it will automatically update the column set.

How does it choose columns? Good question. Let me check the documentation source code…

Basically it picks the top 6 most commonly occurring properties in the current set of documents. But it will give certain properties a boost in importance, making them more likely to be picked. The Studio has a built-in list of property names that it will look for, so any property whose name includes the words “Name”, “Title”, “Description”, or “Status” will be prioritised. You can customise this list of priority columns for a database by creating a document in that database called “Raven/Studio/PriorityColumns”, like this:

image_thumb22

PropertyNamePattern is interpreted as a Regular Expression, so you can get quite sophisticated about which properties you want to boost.
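
If you’d rather set this document up from code than through the Studio, something along these lines should work. Note that the document shape used here (a Columns list of PropertyNamePattern entries) is my assumption based on the screenshot above, not a documented schema:

using (var session = documentStore.OpenSession())
{
    // Store a priority-columns document under the well-known id.
    // The exact shape of this document is assumed, not documented.
    session.Store(new
    {
        Columns = new[]
        {
            new { PropertyNamePattern = "Name$" },
            new { PropertyNamePattern = "Genre" }
        }
    }, "Raven/Studio/PriorityColumns");

    session.SaveChanges();
}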

The IntelliPick algorithm has one more trick up its sleeve. Have a look at this screenshot:

image_thumb30

Do you see it? If you include particular properties in a query, or as a Sort By option, those properties will get boosted in importance. In the example shown, Artist.Name is part of the query, and results are sorted by Genre.Name, so both these properties are picked out to appear as columns. Similarly, when viewing results for a particular index, any properties that are included in the Index name will also be given a boost in priority.

Would it be possible to have a web-browser-based editor for Hebrew text?

It took me two minutes to understand what was going on. What?! The text is not the same! Yes! I rendered some text in an HTML form, and when I got the text back to the server, without changing it myself, the text HAD BEEN CHANGED!

Huh? How can this happen?

This is an HTML page with a form tag containing some Hebrew text, and this is what I used to send the text back to the server:

<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
    <title>What?!</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<body>
    <form method="POST" action="/api/snippets">
        <input name="Content" type="hidden" value="לְשֵׁם יִחוּד קֻדְשָׁא בְּרִיךְ הוּא וּשְׁכִינְתֵּהּ">
        <input type="submit">
    </form>
</body>
</html>

This is what is sent from the client:

לְשֵׁם יִחוּד קֻדְשָׁא בְּרִיךְ הוּא וּשְׁכִינְתֵּהּ

And this is what the server got back:

לְשֵׁם יִחוּד קֻדְשָׁא בְּרִיךְ הוּא וּשְׁכִינְתֵּהּ

Huh? You must be furrowing your forehead… These two snippets look exactly the same to me!

No they do not. Apparently.

The way I knew that they were different is that I edited a snippet in the database from a web page, and then tried to query for the original snippet’s text. What surprised me was that nothing matched the query. I knew that RavenDB supports Unicode at its core, but in order to be sure I wrote a test that proved it. (And committed it to the RavenDB source code, so I can be sure that this won’t break in the future either.)

So the text must not be the same. But what is different? At that point I decided to print out the code of each character using the following gist test helper. Here is the result:

Position     Expected       Actual
---------------------------------------------------------
0         ל    (\u05DC)     ל    (\u05DC)
1         ְ    (\u05B0)     ְ    (\u05B0)
2         ש    (\u05E9)     ש    (\u05E9)
3  *      ׁ    (\u05C1)     ֵ    (\u05B5)
4  *      ֵ    (\u05B5)     ׁ    (\u05C1)
5         ם    (\u05DD)     ם    (\u05DD)
6         \s   (\u0020)     \s   (\u0020)
7         י    (\u05D9)     י    (\u05D9)
8         ִ    (\u05B4)     ִ    (\u05B4)
9         ח    (\u05D7)     ח    (\u05D7)
10        ו    (\u05D5)     ו    (\u05D5)
11        ּ    (\u05BC)     ּ    (\u05BC)
12        ד    (\u05D3)     ד    (\u05D3)
13        \s   (\u0020)     \s   (\u0020)
14        ק    (\u05E7)     ק    (\u05E7)
15        ֻ    (\u05BB)     ֻ    (\u05BB)
16        ד    (\u05D3)     ד    (\u05D3)
17        ְ    (\u05B0)     ְ    (\u05B0)
18        ש    (\u05E9)     ש    (\u05E9)
19 *      ׁ    (\u05C1)     ָ    (\u05B8)
20 *      ָ    (\u05B8)     ׁ    (\u05C1)
21        א    (\u05D0)     א    (\u05D0)
22        \s   (\u0020)     \s   (\u0020)
23        ב    (\u05D1)     ב    (\u05D1)
24 *      ּ    (\u05BC)     ְ    (\u05B0)
25 *      ְ    (\u05B0)     ּ    (\u05BC)
26        ר    (\u05E8)     ר    (\u05E8)
27        ִ    (\u05B4)     ִ    (\u05B4)
28        י    (\u05D9)     י    (\u05D9)
29        ך    (\u05DA)     ך    (\u05DA)
30        ְ    (\u05B0)     ְ    (\u05B0)
31        \s   (\u0020)     \s   (\u0020)
32        ה    (\u05D4)     ה    (\u05D4)
33        ו    (\u05D5)     ו    (\u05D5)
34        ּ    (\u05BC)     ּ    (\u05BC)
35        א    (\u05D0)     א    (\u05D0)
36        \s   (\u0020)     \s   (\u0020)
37        ו    (\u05D5)     ו    (\u05D5)
38        ּ    (\u05BC)     ּ    (\u05BC)
39        ש    (\u05E9)     ש    (\u05E9)
40 *      ׁ    (\u05C1)     ְ    (\u05B0)
41 *      ְ    (\u05B0)     ׁ    (\u05C1)
42        כ    (\u05DB)     כ    (\u05DB)
43        ִ    (\u05B4)     ִ    (\u05B4)
44        י    (\u05D9)     י    (\u05D9)
45        נ    (\u05E0)     נ    (\u05E0)
46        ְ    (\u05B0)     ְ    (\u05B0)
47        ת    (\u05EA)     ת    (\u05EA)
48 *      ּ    (\u05BC)     ֵ    (\u05B5)
49 *      ֵ    (\u05B5)     ּ    (\u05BC)
50        ה    (\u05D4)     ה    (\u05D4)
51        ּ    (\u05BC)     ּ    (\u05BC)


Assert.Equal() Failure
Position: First difference is at position 3
Expected: לְשֵׁם יִחוּד קֻדְשָׁא בְּרִיךְ הוּא וּשְׁכִינְתֵּהּ
Actual:   לְשֵׁם יִחוּד קֻדְשָׁא בְּרִיךְ הוּא וּשְׁכִינְתֵּהּ

What we can see from this test is that when the browser sends back the above text, it modifies it. To be more exact, when a Hebrew letter has two Niqqud characters (which are like vowel characters in English: they tell you how to pronounce a specific sign), the browser swaps their order.

In the above text we had Shin (ש) followed by a ShinDot (\u05C1) followed by a Zeire (\u05B5), but the browser swapped the order of the two Niqqud characters, so when the text was posted back to the server, the server got Shin (ש) followed by a Zeire (\u05B5) followed by a ShinDot (\u05C1).

After realizing that this really happens, I then wanted to know why. What is the reason for this behavior?

So I emailed the above question to a friend of mine, Efraim Feinstein, who was kind enough to give me some background about his project Open Siddur and to discuss some related topics.

What he answered is that browsers do some sort of Unicode normalization, which in this case is probably Normalization Form C (NFC).
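
You can reproduce this reordering outside the browser with .NET’s built-in normalization. The following stand-alone sketch (unrelated to any of the code above) shows the same swap of the Shin Dot and Zeire marks from the table:

using System;
using System.Linq;
using System.Text;

class NfcReorderDemo
{
    static string Codes(string s)
    {
        return string.Join(" ", s.Select(c => string.Format("\\u{0:X4}", (int)c)));
    }

    static void Main()
    {
        // Shin (U+05E9) + Shin Dot (U+05C1) + Zeire (U+05B5), in the order the form was authored.
        var original = "\u05E9\u05C1\u05B5";

        // NFC applies canonical ordering: combining marks are sorted by combining class
        // (Zeire has class 15, Shin Dot has class 24), so the two points swap places.
        var normalized = original.Normalize(NormalizationForm.FormC);

        Console.WriteLine(Codes(original));        // \u05E9 \u05C1 \u05B5
        Console.WriteLine(Codes(normalized));      // \u05E9 \u05B5 \u05C1
        Console.WriteLine(original == normalized); // False
    }
}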

So I dug deeper into Unicode normalization, and I found out that this can be a serious problem when editing Hebrew text with Niqqud. While the cases where this actually impacts how the text is displayed are rare, they do exist, as outlined in the following document (page 9). Besides, this overrides the Hebrew font convention for the Niqqud order, as mentioned in the same document:

… most users familiar with Hebrew would agree that the dagesh should, logically and linguistically, precede the vowel and the cantillation mark, and most would also agree that the vowel should precede the cantillation mark

Searching the web didn’t yield any solution. I tried to see if I could come up with a custom normalization that would de-normalize the characters back to their original order, but I quickly concluded that this is not an easy task. Based on the recommended mark ordering here (page 12), I can see that Hiriq precedes Patah, and this leads to the error in the word ירושלים, which changes its pronunciation from yerušālayim to yerušālaim, as described on page 9 here.

So, does this mean that I cannot use a web page in order to edit such Hebrew text? Really?

Or can you come up with some solution?

A solution, as I see it, can be either:

  1. A way to avoid the normalization performed by the browser (Google Chrome).
  2. A de-normalization algorithm to revert the Niqqud to its original order – as quoted above from this document, page 8.

I also posted this question to Stack Overflow; you can answer it there if that is more convenient for you.

Tags:

Published at

Originally posted at

Single point of failure

Cross posted from http://www.code972.com/blog/2012/06/single-point-of-failure/

September 1st, 1983. Korean Airlines flight 007 from New York City to Seoul disappeared a couple of hours after take-off. Only later was it discovered that the plane deviated from its original route; instead of flying through air corridor R-20, it entered Soviet airspace and was shot down by a Soviet interceptor. All 269 people on board were killed.

During an investigation conducted by the National Transportation Safety Board (NTSB), it was made clear the plane was cruising far further north than it should have been. Instead of flying above international waters, the plane somehow entered Soviet airspace, leading the Soviets to shoot it down, thinking it was a plane on a spying mission. How did the plane deviate that much from its assigned route? The NTSB came up with two possible options, both pointing at human error.

The first option was typing the aerial waypoints incorrectly. These are latitude/longitude pairs the co-pilot enters and the captain validates, and they form the flight's route. Mistyping one digit may take the plane way off its planned route, possibly making it enter hostile territories. The NTSB also mentioned another possibility - not turning the coordinates-based auto-pilot (INS) on, and instead flying with the Magnetic Heading auto-pilot mode. The Magnetic Heading option is always on during take-off, so it would require the pilot to remember to change the auto-pilot mode. If he failed to do so, the INS system would not use the coordinates they typed to guide the plane, since it would be off.

The captain on board KAL flight 007 had years of flying experience: 10+ years at KAL, and many years before that in the air force. Therefore, the NTSB deemed the second option "less likely". They thought it much more likely that a number was typed incorrectly and never verified than that a very experienced pilot forgot to flip a switch right after take-off. It is a switch you flip on every flight, after all.

Years later, after the Soviet Union fell apart and the investigation was able to conclude using the original black box from the plane, the real reason for the flight's deviation was discovered. It turns out the captain forgot to switch the INS system on, so the plane was cruising using the Magnetic Heading. Had he remembered to switch the INS system on at any point during the flight, he would have caught the error and redirected the plane to its assigned route, probably avoiding death.

In the software world we have a lot of slogans, methodologies and names for patterns. Single point of failure is not just a slogan. In this case, the system had many single points of failure, and it was only a matter of time before it would have mortal consequences. I'm pretty sure this is not the only time a pilot forgot to switch to INS mode; it is the only time (that I know of) it caused death. Of an entire 747.

The Single Point of Failure in this case is not a system crash, or a bottleneck. It is about assuming the operator will always remember to do the right thing at the right time. And that is wrong, even if your user has 10+ years of flawless experience. I'm consciously avoiding the discussion on the poor UX of the auto-pilot system, and this is why I left some details relating to it out. Yes, you can get away from this using some UX tricks, like checklists or blinking signs or whatever, but then in the best scenario you are just making it less likely to happen, which is not good enough.

If it is the common practice to always first have magnetic heading mode turned on, and then switch to something else (not necessarily INS), then having it as a dedicated mode is a wrong assumption. But here I'm talking UX again, so we'll stop here.

When designing any software, not to mention complex systems, don't ever allow for a single point of failure, and don't ever assume it is only about preventing bottlenecks or crashes. In some systems you might save lives, but in most systems you'll just save yourself a lot of support calls.

You can read the full story, with all the details, in the Wikipedia page. National Geographic had a chapter on it in the excellent "Air Crash Investigation" series, which you can watch here. The image above is from that show.

Published at

Originally posted at

Fade-Trimming TextBlocks in Silverlight and WPF

This is a guest post by Samuel Jack, who had done a lot of work on the new UI for RavenFS.

Using ellipsis is so last year. When did you last see the cool HTML 5 kids writing … when they couldn’t fit all their text in a column? They’ve got this fancy new feature, powered by CSS3, where text that doesn’t quite fit simply fades out as it reaches the edge of the text block. Like this (with thanks to the QuickUI Catalog)

image

Subtle! It looks much prettier, and it means you can fit in an extra three characters of real text instead of the …s.

Silverlight’s future might be uncertain, but I want to show you that it isn’t ready to roll over and die just yet. We can have those fancy fade-trimming effects too!

As part of the work I’m doing on the UI for RavenFS I implemented fade-trimming to make our DataGrids look smarter, and I thought it would be nice to share. So I’ve packaged up all you need into an easy-to-use class called FadeTrimming.cs, which works in Silverlight and WPF, and put together two demo projects to show you how to get started.

How to use it

Include the FadeTrimming.cs file in your project. Add a namespace reference in the root element of your XAML file like this:

<Window x:Class="FadeTrimmingDemo.Wpf.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
        
        xmlns:b="clr-namespace:SilverlightEffects"
        
        Title="MainWindow" Height="350" Width="525">

You can then enable Fade Trimming on individual TextBlocks like this:

<TextBlock b:FadeTrimming.IsEnabled="True">
    Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
</TextBlock>

or using a Style, like this:

<Style x:Key="Style_TextBlock_FadeTrimming" TargetType="TextBlock">
    <Setter Property="b:FadeTrimming.IsEnabled" Value="True"/>
</Style>

If you want to use it in a column of a DataGrid, try this:

<sdk:DataGridTemplateColumn Header="Name" Width="*">
     <sdk:DataGridTemplateColumn.CellTemplate>
         <DataTemplate>
             <TextBlock Text="{Binding Item.Name}" 
                        b:FadeTrimming.IsEnabled="True" 
                        b:FadeTrimming.ShowTextInToolTipWhenTrimmed="True"
                        VerticalAlignment="Center"
                        Margin="4"/>
         </DataTemplate>
     </sdk:DataGridTemplateColumn.CellTemplate>
 </sdk:DataGridTemplateColumn>

This last example shows another useful feature. Set b:FadeTrimming.ShowTextInToolTipWhenTrimmed="True", and when the text is trimmed, the TextBlock will automatically start showing the full text in its tool tip.

A couple of things to note:

  • If TextBlock.TextWrapping is set to NoWrap, then the TextBlock will be fade-trimmed on the right. Otherwise, when text wrapping is switched on, it will be fade-trimmed at the bottom.
  • You’ll probably want to set TextOptions.TextHintingMode=”Animated”. Otherwise there’s a strange rendering glitch in Silverlight where the font appears darker when it is fade-trimmed. I don’t think my code’s to blame: it seems to happen whenever text is rendered using anything other than a SolidColorBrush. TextHintingMode is an inherited property, so you can set it on a root element and it will flow to the children.
  • When TextBlocks are used in certain places (e.g. as direct children of Grids, with GridSplitters involved) they don’t seem to get the events they need to know when to update the Fade Trimming. The solution is to wrap the TextBlock in another element, usually a Border, and then everything works as it should.

How it works

There are two key ingredients to this fade-trimming trick:

  1. Knowing when and where the TextBlock has been clipped
  2. Setting the TextBlock’s Foreground brush to a LinearGradientBrush that fades to transparent just before the clip boundary.

Finding out if and how an element has been clipped in Silverlight turns out to be very easy. You just call

var layoutClip = LayoutInformation.GetLayoutClip(_textBlock);

layoutClip will be null if the element is not clipped. Otherwise its Bounds property will be set to a rectangle outlining the visible area of the element.
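
Putting that together, the decision about which brush to use looks roughly like this (a sketch of the idea rather than the actual FadeTrimming.cs source):

var layoutClip = LayoutInformation.GetLayoutClip(_textBlock);
if (layoutClip == null)
{
    // Nothing is clipped: a plain solid foreground will do.
    _textBlock.Foreground = new SolidColorBrush(_foregroundColor);
}
else
{
    // Clipped: build a brush that fades out just before the visible edge.
    var visibleWidth = layoutClip.Bounds.Width;
    _textBlock.Foreground = GetHorizontalClipBrush(visibleWidth);
}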

Fading at the edges

Setting up the TextBlock’s Foreground Brush is slightly more involved. Here’s how we do it for a horizontally clipped TextBlock:

private LinearGradientBrush GetHorizontalClipBrush(double visibleWidth)
{
    return new LinearGradientBrush
    {
        // set MappingMode to absolute so that
        // we can specify the EndPoint of the brush in
        // terms of the TextBlock's actual dimensions
        MappingMode = BrushMappingMode.Absolute,
        StartPoint = new Point(0, 0),
        EndPoint = new Point(visibleWidth, 0),
        GradientStops =
            {
                new GradientStop()
                    {Color = _foregroundColor, Offset = 0},
                new GradientStop()
                    {
                        Color = _foregroundColor,
                        // Even though the mapping mode is absolute,
                        // the offset for gradient stops is always relative with
                        // 0 being the start of the brush, and 1 the end of the brush
                        Offset = (visibleWidth - FadeWidth)/visibleWidth
                    },
                new GradientStop()
                    {
                        Color = Color.FromArgb(0, _foregroundColor.R, _foregroundColor.G, _foregroundColor.B),
                        Offset = 1
                    }
            }
    };
}

First we create a LinearGradientBrush and set its MappingMode to BrushMappingMode.Absolute. This allows us to set the StartPoint and EndPoint in terms of coordinates on the TextBlock itself. We set StartPoint to the top left corner of the TextBlock, and EndPoint to the edge of the visible region.

Then we set three GradientStops, the first two in the solid foreground color, and the last one, at the edge of the clipping region, completely transparent. One oddity with GradientStops is that even though the brush mapping mode is Absolute their values are still relative – but relative to the StartPoint and EndPoint of the brush. So Offset = 0 maps to the StartPoint of the Brush, and Offset = 1 maps to the end point of the brush. We want the fade to start a few pixels in from the edge of the visible area, so we set the Offset of the second GradientStop to (visibleWidth - FadeWidth)/visibleWidth.
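
For example, if visibleWidth is 200 pixels and FadeWidth is 20, the second stop sits at (200 - 20)/200 = 0.9, so the text stays fully opaque for the first 90% of the visible area and fades out over the final 20 pixels.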

Other points of interest

There are a couple of other points of interest, if you fancy inspecting the code.

One is that the clipping requirements of the TextBlock can change for many reasons. It could change because the length of the text changes, or because the font size or style changes, or because the size of the text block itself changes (imagine it is in a column of a DataGrid that the user is expanding or contracting). Whenever any of these changes occur, we need to update our special Foreground brush so the gradient fades out at the right point.

It turns out that we can handle all of these cases just by listening for the SizeChanged event on the TextBlock and on its visual parent. Note that the visual parent is not the one you obtain by calling TextBlock.Parent. That gets you the logical parent (see here to understand the difference). To get the visual parent you call VisualTreeHelper.GetParent(_textBlock).
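
In code, the wiring looks something like the following sketch (the method names are assumed; see FadeTrimming.cs for the real thing):

private static void StartListening(TextBlock textBlock)
{
    textBlock.SizeChanged += HandleSizeChanged;

    // TextBlock.Parent gives the logical parent; for layout changes we need the visual parent.
    var visualParent = VisualTreeHelper.GetParent(textBlock) as FrameworkElement;
    if (visualParent != null)
        visualParent.SizeChanged += HandleSizeChanged;
}

private static void HandleSizeChanged(object sender, SizeChangedEventArgs e)
{
    // Re-check LayoutInformation.GetLayoutClip and rebuild the fading Foreground brush here.
}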

Before I hit on the trick of using the Foreground brush, I tried implementing this using OpacityMasks. That worked too, but it was horribly slow once you enabled fade-trimming on any significant number of TextBlocks. But that is only to be expected when you remember how OpacityMasks work. An OpacityMask is a Brush from which Silverlight extracts just the Alpha channel. To display an element with an OpacityMask, Silverlight first has to render everything behind the element, then it has to render the element (and each of its children), then finally it has to composite the two pixel-by-pixel, giving each pixel of the element the transparency determined by the corresponding pixel in the OpacityMask brush. And none of that work is hardware accelerated. OpacityMasks are great, but only when used sparingly.

Give it a go. I’d love to hear how it works out for you.

Published at

Originally posted at

Comments (8)

Fixing memory leaks in RavenDB Management Studio–BindableCollection

Continuing from my last blog post in this series, in which I talked about the WeakReference, this time I’ll talk about the BindableCollection.

One of the key goals of the new RavenDB Management Studio was to never show data in the Studio that is already obsolete on the server. Say a document was deleted or updated on the server: you want to see this change in the Management Studio immediately (or at least a few seconds later). We came up with an interesting solution for that. Each model has a TimerTickedAsync method, which is called both by update events, like deleting a document in the Studio, and by a 5-second timer, in order to fetch changes that occurred on the server. Now we need to merge the new data with the old data, remove the expired items and render the new ones, but we’d better not re-render the stuff that hasn’t changed on every timer tick. So this is the code that we came up with in order to do the merge:

public class BindableCollection<T> : ObservableCollection<T> where T : class
{
    private readonly Func<T, object> primaryKeyExtractor;
    private readonly KeysComparer<T> objectComparer;

    public BindableCollection(Func<T, object> primaryKeyExtractor, KeysComparer<T> objectComparer = null)
    {
        if (objectComparer == null)
            objectComparer = new KeysComparer<T>(primaryKeyExtractor);
        else
            objectComparer.Add(primaryKeyExtractor);
        
        this.primaryKeyExtractor = primaryKeyExtractor;
        this.objectComparer = objectComparer;
    }

    public void Match(ICollection<T> items, Action afterUpdate = null)
    {
        Execute.OnTheUI(() =>
        {
            var toAdd = items.Except(this, objectComparer).ToList();
            var toRemove = this.Except(items, objectComparer).ToArray();

            for (int i = 0; i < toRemove.Length; i++)
            {
                var remove = toRemove[i];
                var add = toAdd.FirstOrDefault(x => Equals(ExtractKey(x), ExtractKey(remove)));
                if (add == null)
                {
                    Remove(remove);
                    continue;
                }
                SetItem(Items.IndexOf(remove), add);
                toAdd.Remove(add);
            }
            foreach (var add in toAdd)
            {
                Add(add);
            }

            if (afterUpdate != null) afterUpdate();
        });
    }
    ...
}

Note that the Match method effectively removes and adds just what is needed.
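
For context, a caller typically looks something like this sketch (the fetch method and document type are illustrative stand-ins, not the actual Studio code):

public class DocumentsModel : Model
{
    public BindableCollection<ViewableDocument> Documents { get; private set; }

    public DocumentsModel()
    {
        // Documents are keyed by their id, so Match can tell new items from updated ones.
        Documents = new BindableCollection<ViewableDocument>(doc => doc.Id);
    }

    public override Task TimerTickedAsync()
    {
        // FetchDocumentsFromServerAsync is assumed to return Task<ICollection<ViewableDocument>>.
        return FetchDocumentsFromServerAsync()
            .ContinueWith(task => Documents.Match(task.Result));
    }
}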

Later on, we started using the Reactive Extensions’ Observable pattern in order to register to some events, and this made some of our main models disposable.

So now we had lots of models that were created by TimerTickedAsync but never got used in any manner. Those objects were held in memory, since they were registered to events outside of the class but never got disposed (Dispose unregisters those events).

So this fixed it:

public void Match(ICollection<T> items, Action afterUpdate = null)
{
    Execute.OnTheUI(() =>
    {
        var toAdd = items.Except(this, objectComparer).ToList();
        var toRemove = this.Except(items, objectComparer).ToArray();
        var toDispose = items.Except(toAdd, objectComparer).OfType<IDisposable>().ToArray();

        for (int i = 0; i < toRemove.Length; i++)
        {
            var remove = toRemove[i];
            var add = toAdd.FirstOrDefault(x => Equals(ExtractKey(x), ExtractKey(remove)));
            if (add == null)
            {
                Remove(remove);
                continue;
            }
            SetItem(Items.IndexOf(remove), add);
            toAdd.Remove(add);
        }
        foreach (var add in toAdd)
        {
            Insert(0, add);
        }
        foreach (var disposable in toDispose)
        {
            disposable.Dispose();
        }

        if (afterUpdate != null) afterUpdate();
    });
}

Now we dispose of all the objects that were created on the fly and are not needed anymore.

Published at

Originally posted at

Fixing memory leaks in RavenDB Management Studio - FluidMoveBehavior

Continuing from my last blog post in this series, in which I talked about the WeakReference, this time I’ll talk about the FluidMoveBehavior.

The FluidMoveBehavior gives you a great transition effect for the items in your WrapPanel (which is in the Silverlight Toolkit). The FluidMoveBehavior is part of Expression Blend and lives in microsoft.expression.interactions.dll.

When I profiled the application with a memory profiler, I found some memory leaks caused by the FluidMoveBehavior. Surprised, I Googled “FluidMoveBehavior memory leak” and the first result was this thread, which effectively showed that this is a known issue with no fix yet.

So removing the FluidMoveBehavior from the Management Studio fixed a big source of memory leaks. What’s interesting is that the visual effect of the FluidMoveBehavior was barely needed anyway, since we already repopulate the panel with items each time the panel size changes.

Published at

Originally posted at

Fixing memory leaks in RavenDB Management Studio - WeakReference

Continuing from the last blog post in this series, in which I talked about the WeakEventListener, now I’m going to talk about using the WeakReference.

In the RavenDB Management Studio we have 4 pages that contain lots of data: the Home page, Collections page, Documents page and Indexes page. Once you enter one of those pages, we fetch the data from the RavenDB database, but in order to avoid fetching it each time you navigate to that page, the data is stored in a static variable. This way, if you re-navigate to a page, you will see the data immediately, while we make a background request to RavenDB in order to give you more up-to-date data.

You can look at this code, for example:

public class HomeModel : ViewModel
{
    public static Observable<DocumentsModel> RecentDocuments { get; private set; }

    static HomeModel()
    {
        RecentDocuments = new Observable<DocumentsModel>
                          {
                            Value = new DocumentsModel
                                    {
                                        Header = "Recent Documents",
                                        Pager = {PageSize = 15},
                                    }
                          };
    }

    public HomeModel()
    {
        ModelUrl = "/home";
        RecentDocuments.Value.Pager.SetTotalResults(new Observable<long?>(ApplicationModel.Database.Value.Statistics, v => ((DatabaseStatistics)v).CountOfDocuments));
        ShowCreateSampleData = new Observable<bool>(RecentDocuments.Value.Pager.TotalResults, ShouldShowCreateSampleData);
    }

    public override Task TimerTickedAsync()
    {
        return RecentDocuments.Value.TimerTickedAsync();
    }
}

The problem is, what happens when the application consumes too much memory because of all of this static data? In that case there are likely to be performance issues. In order to avoid that, we made the static data a WeakReference, so we basically say to the Silverlight GC engine: if you want to collect the data, please do so. In that case we’ll just re-initialize it when we need the data again.
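
A minimal sketch of what that looks like (the names are taken from the example above, but this is not the actual Studio change): the static property goes through a WeakReference and re-creates the data if the GC has collected it.

public class HomeModel : ViewModel
{
    private static WeakReference recentDocuments;

    public static Observable<DocumentsModel> RecentDocuments
    {
        get
        {
            var cached = recentDocuments != null
                             ? recentDocuments.Target as Observable<DocumentsModel>
                             : null;
            if (cached == null)
            {
                // Either this is the first access or the GC reclaimed the data: re-initialize it.
                cached = new Observable<DocumentsModel>
                         {
                             Value = new DocumentsModel
                                     {
                                         Header = "Recent Documents",
                                         Pager = {PageSize = 15},
                                     }
                         };
                recentDocuments = new WeakReference(cached);
            }
            return cached;
        }
    }
}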

This had a huge impact on the memory consumption of the Management Studio application, but we still had some memory leaks, which I’ll talk about in the next blog post.

Published at

Originally posted at

Why make WeakEventListener internal?

In the previous post I described how I used the WeakEventListener from the Silverlight Toolkit in order to solve a memory leak. This class is a much-needed tool, and it is something you are encouraged to use in Silverlight applications in order to avoid memory leaks.

Since this class is internal, you must copy the code from the source (thanks to the Microsoft Public License) into your Silverlight application. I’m not sure if this is still the case in the latest version of the toolkit, which is compatible with Silverlight 5 (its source code was not public when I wrote this), but in any case, if you’re developing a Silverlight application, I’m pretty sure that you’ll need this class.

Tags:

Published at

Originally posted at

Fixing memory leaks in RavenDB Management Studio - WeakEventListener

After shipping the new version of the Management Studio for RavenDB, which was part of build #573, we got reports from our users that it had some memory leaks. These reports indicated that we had a huge memory leak in the Management Studio. I started to investigate this and found a bunch of problems that caused it. In this blog post series I’ll share with you what it took to fix them.

RavenDB Management Studio is a Silverlight-based application. One of the mistakes that can easily be made in a Silverlight application (as in many other UI platforms) is to attach an event handler to an object and then discard that object. The problem is that the object will never be cleaned up from memory, since we still have a reference to it – the event listener.

Consider the following code for example:

public static class ModelAttacher
{
    public static readonly DependencyProperty AttachObservableModelProperty =
        DependencyProperty.RegisterAttached("AttachObservableModel", typeof(string), typeof(ModelAttacher), new PropertyMetadata(null, AttachObservableModelCallback));
    
    private static void AttachObservableModelCallback(DependencyObject source, DependencyPropertyChangedEventArgs args)
    {
        var typeName = args.NewValue as string;
        var view = source as FrameworkElement;
        if (typeName == null || view == null)
            return;

        var modelType = Type.GetType("Raven.Studio.Models." + typeName) ?? Type.GetType(typeName);
        if (modelType == null)
            return;

        try
        {
            var modelInstance = Activator.CreateInstance(modelType);
            var observableType = typeof(Observable<>).MakeGenericType(modelType);
            var observable = Activator.CreateInstance(observableType) as IObservable;
            var piValue = observableType.GetProperty("Value");
            piValue.SetValue(observable, modelInstance, null);
            view.DataContext = observable;

            var model = modelInstance as Model;
            if (model == null) 
                return;
            model.ForceTimerTicked();

            SetPageTitle(modelType, modelInstance, view);
            
            view.Loaded += ViewOnLoaded;
        }
        catch (Exception ex)
        {
            throw new InvalidOperationException(string.Format("Cannot create instance of model type: {0}", modelType), ex);
        }
    }
    
    private static void ViewOnLoaded(object sender, RoutedEventArgs routedEventArgs)
    {
        var view = (FrameworkElement)sender;
        var observable = view.DataContext as IObservable;
        if (observable == null)
            return;
        var model = (Model)observable.Value;
        model.ForceTimerTicked();

        var viewModel = model as ViewModel;
        if (viewModel == null) return;
        viewModel.LoadModel(UrlUtil.Url);
    }
}

For information about the ModelAttacher pattern, take a look at this blog post.

What this basically means is that we create a model for each page, but never dispose of it. So each time you navigate to a page, a new view model is created for the page, but the old one never gets cleaned up.

There were more examples like that, where we have an event keeping a reference to dead objects. You can look at the RavenDB commit log if you are interested in the details. But what is the way to solve this?

In order to solve this I copied the WeakEventListener from the Silverlight Toolkit, which is an internal class. Using the WeakEventListener to attach to events solved the above memory issue, since we no longer hold a strong reference to the dead objects, and the GC can just clean them up.
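
To give a feel for how that looks, here is a hedged usage sketch. The event, model type and handler below are stand-ins rather than the actual Studio code, and it assumes the toolkit’s WeakEventListener<TInstance, TSource, TEventArgs> shape with OnEventAction, OnDetachAction and OnEvent:

// Instead of the strong subscription
//     ApplicationModel.ServerChanges += model.HandleServerChanges;
// we attach through a weak listener, which only holds a WeakReference to the model:
var listener = new WeakEventListener<DocumentsModel, object, EventArgs>(model)
{
    OnEventAction = (instance, source, args) => instance.HandleServerChanges(source, args),
    OnDetachAction = l => ApplicationModel.ServerChanges -= l.OnEvent
};
ApplicationModel.ServerChanges += listener.OnEvent;

// Once the model has been garbage collected, the next event triggers OnDetachAction,
// which unsubscribes the listener itself, so nothing keeps the dead model alive.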

Stress testing RavenDB

The following is cross posted from Mark Rodseth’s blog (he also posted a follow up post with more details).

Mark is a .Net Technical Architect at a digital agency named Fortune Cookie in London. I would like to take the opportunity and thank Mark both for the grand experiment about which you are about to read and for the permission to post this in the company blog.

Please note: Mark or Fortune Cookie are not affiliated with either Hibernating Rhinos or RavenDB in any way.

When a colleague mentioned RavenDB to me I had a poke around and discovered that it was one of the more popular open source NoSQL technologies on the market. Not only that, but it was bundled with Lucene.Net search, making it a document database coupled with Lucene search capabilities. With an interest in NoSQL technology and a grudge match that hadn’t been settled with Lucene.Net, I set myself the challenge to swap out our SQL search implementation with RavenDB and then do a like-for-like load test against the two search technologies.
These are my findings from both a programmatic and performance perspective.


Installing RavenDB
There isn't much to installing Raven, and it's pretty much a case of downloading the latest build and running the Server application.
The server comes with a nice Silverlight management interface which allows you to manage all aspects of Raven Db from databases to data to indexes. All tasks have a programmatic equivalent but a decent GUI is an essential tool for noobs like myself.

Storing the Data
My first development task was to write an import routine which parsed the property data in SQL and then added it into a Raven database. This was fairly easy; all I needed to do was create a POCO, populate it with data from SQL and save it using the C# Raven API. The POCO was serialised into JSON and saved as a new document in RavenDB.

The main challenge here was changing my thinking from relational modelling to domain driven modelling - a paradigm shift required when moving to NoSQL - which includes concepts like aggregate roots, entities and value types. Journeying into this did get a bit metaphysical at times but here is my understanding of this new fangled schism.

Entity - An entity is something that has a unique identity and meaning in both the business and system context. In the property web site example, a flat or a bungalow or an office match these criteria.

Value Type - Part of the entity which does not require its own identity and has no domain or system relevance on its own. For example, a bedroom or a toilet.

Aggregate Root - A master entity with special rules and access permissions that relate to a grouping of similar entities. For example, a property is an aggregate of flats, bungalows and offices. This is the best description of these terms I found.

Hibernating Rhinos note: With RavenDB, we usually consider the Entity and Aggregate Root to be synonyms to a Document. There isn’t a distinction in RavenDB between the two, and they map to a RavenDB document.

In this example, I created one Aggregate Root Entity to store all property types.

C# Property POCO

Indexing the Data
Once the Data was stored it needed to be indexed for fast search. To achieve this I had to get to grips with map reduce functions which I had seen around but avoided like the sad and lonely looking bloke** at a FUN party.
The documentation on the RavenDB web site is pretty spartan, but after hacking away I finally created an index that worked on documents with nested types and allowed for spatial queries.
RavenDB allows you to create indexes using Map Reduce functions in LINQ. What this allows you to do is create a Lucene index from a large, tree like structure of data. Map reduce functions give you the same capability as SQL using joins and group by statements. To create a spatial index which allowed me to search properties by type and sector (nested value types) I created an index using the following Map Reduce function.

Index created using the Raven DB Admin GUI

Hibernating Rhinos note: a more efficient index would likely be something like:

from r in docs.AssetDetailPocos
select new
{
  sectorname = r.Sectors,
  prnlname = r.AddressPnrls,

  r.AssetId,
  r.AskingPrice,
  r.NumberOfBedrooms,
  r.NumberOfBathRooms,
  
  
  _ = SpatialIndex.Generate(r.AssetLatitude, r.AssetLongitude)
}

This would reduce the number of index entries and make the index smaller and faster to generate.

Querying the data

Now that I had data that was indexed, the final development challenge was querying it. RavenDB has a basic search API and a Lucene Query API for more complex queries. Both allow you to write queries in LINQ. To create the kind of complex queries you would require in a property-searching web site, the API was a bit lacking. To work around this I had to construct my own native Lucene queries. Fortunately the API allowed me to do so.
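
For illustration, such a query looks roughly like this, assuming the 1.x client's session.Advanced.LuceneQuery API; the index name and Lucene clause here are made up:

// Raw Lucene syntax gets you past the limitations of the higher-level query API.
var results = session.Advanced
    .LuceneQuery<AssetDetailPoco>("PropertySearchIndex")
    .Where("sectorname:Residential AND NumberOfBedrooms:[2 TO 4]")
    .Take(20)
    .ToList();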

Performance Testing

All the pawns were now in place for my load test.

  • The entire property SQL database was mirrored to RavenDB.
  • The Search Interface now had both a SQL and a RavenDB implementation.
  • I created a crude Web Page which allowed switching the search from SQL to RavenDB via query string parameters and output the results using paging. To ensure maximum thrashing, the load tests passed in random geo locations for proximity search and keywords for attribute search.
  • A VM was set up and ready to face the wrath of BrowserMob.

I created a ramp test scaling from 0 to 1000 concurrent users firing a single GET request with no think time at the Web Page, and ran it in isolation against the SQL implementation and then in isolation against the RavenDB implementation. The test ran for 30 minutes.
And for those of you on the edge of your seat, the results were a resounding victory for RavenDB. Some details of the load test are below, but the headline is that SQL choked at 250 concurrent users, whereas with RavenDB, even with 1000 concurrent users, the response time was below 12 seconds.

SQL Load Test

Transactions: 111,014 (Transaction = Single Get Request)
Failures: 110,286 (Any 500 or timeout)

SQL Data Throughput - Flatlines at around 250 concurrent users.

RavenDB Load Test

Transactions: 145,554 (Transaction = Single Get Request)
Failures: 0 (Any 500 or timeout)

RavenDB Data Throughput - What the graph should look like

Final thoughts

RavenDB is a great document database with fairly powerful search capabilities. It has a lot of pluses and a few negatives, which are listed for your viewing pleasure below.
Positives

  • The documentation, although spartan, does cover the fundamentals, making it easy to get started. On some occasions I did have to sniff through the source code to fathom how some things worked, but that is the beauty of open source I guess.
  • The Silverlight Admin interface is pretty sweet.
  • The Raven community (a Google group) is very active, and the couple of queries I posted were responded to almost immediately.
  • Although the API did present some challenges, it both allowed you to bypass its limitations and even contribute to the project yourself.
  • The commercial licence for RavenDB is pretty cheap at a $600 one-off payment.

Negatives

  • The web site documentation and content could do with a facelift. (Saying that, I just checked the web site and it seems to have been revamped.)
  • I came across a bug in Lucene.Net related to granular spatial queries which has yet to be resolved. Not RavenDB's fault, but a dependence on third-party libraries can cause issues.
  • I struggled to find really impressive commercial reference sites. There are some testimonials but they give little information away. 
  • Sharding scares me.

I look forward to following the progress of RavenDB and hopefully one day using it in a commercial project. I'm not at the comfort level yet to propose it, but with some more investigation and perhaps some good reference sites this could change very quickly.


* Starry Eyed groupies sadly didn't exist, nor have they ever.
** Not me.

http://ravendb.net

Tags:

Published at

Originally posted at

Invalid JSON results via Powershell 3

This is a guest post by Chris, a RavenDB user, originally posted on the RavenDB group.

Background

I’ve been fiddling with document databases for a while, trying to find a way to aggregate our logs without having to denormalize everything. I started with CouchDB, but because of the odd status of that project, with the founder abandoning it, and since I work in a Windows world, going with Raven ultimately seemed like a wholly better solution – all the goods of Couch plus a .NET API.

I built a data model in C# and used a CSV library to push a few hundred thousand records into Raven. The goal is to allow support engineers to query this log information with some simple PowerShell one-liners.

I thought parsing 100megs of CSV and jamming in the thousands of corresponding documents would be the difficult part to figure out and the one-liners would be a couple hours of work at most.

The Problem

Yes, it was a bit challenging to build an efficient importer, but most of that was due to my limited knowledge of C#. I thought it still wasn’t working when I started to build my Posh one-liners and some queries were returning invalid JSON. The PowerShell 3 CTP introduces some new cmdlets for handling JSON, and one specifically for executing HTTP REST API calls – Invoke-RestMethod (alias irm). IRM is 100 times more user-friendly than cURL, and it even goes a step above and beyond by automatically deserializing the content returned. So, it’s extremely easy to consume JSON via a REST API, but I kept running into an issue where Posh would throw an error that there was no Results (the deserialized JSON) property on the object. I must have been pushing in bad data – garbage in equals garbage out.

I thought the HTML in some of the log values was invalidating the JSON string. I deleted all the documents, added a call to HTTPEncode on all strings before Storing it to Raven, then I reloaded the records. I ended up with the same results; or, lack of Results property in this case.

The Invoke-RestMethod cmdlet did return what looked like a valid JSON string, so I dropped it into Notepad++ and turned on JavaScript syntax highlighting and went through the string. I found the culprit.

"SkippedResults:0,

There was a missing quote around a key name. I almost posted this as a RavenDB bug, but I checked the raw HTTP response in Fiddler and found that the quote existed.

What do to about it

I am working with a Technology Preview release of Powershell 3, so I checked Connect and found a similar report of missing characters in content deserialization.

“It seems that the last byte of the second HTTP response packet is being dropped by powershell. This only seems to happen when the second response packet is smaller than the first.”

Microsoft closed this issue as “Won’t Fix” and suggested using Invoke-WebRequest instead. Invoke-WebRequest removes all the magic of Invoke-RestMethod and just gets raw HTTP content. I piped the Content property to the ConvertFrom-Json cmdlet (also new to PowerShell 3) and got properly deserialized JSON.

$c = Invoke-WebRequest "http://localhost:8080/indexes/vovici/Logs/process?query=customer:blah"
$r = $c.content | convertfrom-json

So, it’s an extra step, but if you want to reliably access RavenDB’s HTTP API via PowerShell 3 that seems to be the way to do it.

UPDATE: It does look like MS fixed the lost character issue in the Invoke-RestMethod cmdlet in the latest beta release. I can pull down 1024 documents with one web request and get a usable, deserialized Object[].

Published at

Originally posted at

Model Attacher pattern in Silverlight applications

A while ago, when we started to develop our next version of RavenDB Studio, one of our goals was to make its code as simple as possible. That way, we ensure that it is easy to understand what is going on, so making changes to the Studio should be a trivial task.

In order to achieve that, we decided not to use any MVVM toolkits, but to simply use simple pages (views) and attach a model to them. In this approach, every view (page) knows how to resolve its view-model by itself. This makes the Silverlight code much simpler, since it lets you open a specific view by just navigating to its relative URL.

In order to make this possible we have a ModelAttacher.AttachModel attached property on the page, which takes care of instantiating the view-model and attaching it to the page’s DataContext property.

Take a look at the following typical view (page) in order to see it in action:

<Infrastructure:View x:Class="Raven.Studio.Views.Home"
                     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     xmlns:Infrastructure="clr-namespace:Raven.Studio.Infrastructure"
                     Title="Home"
                     Style="{StaticResource PageStyle}"
                     Infrastructure:ModelAttacher.AttachModel="HomeModel">

</Infrastructure:View>

In this example, we have an empty Home view, which makes use of a HomeModel. The ModelAttacher’s job here is to create an instance of the HomeModel class and attach it to the View.DataContext property. (The view is a simple class that derives from Page.)

This is how the ModelAttacher works in order to achieve this:

namespace Raven.Studio.Infrastructure
{
    public static class ModelAttacher
    {
        public static readonly DependencyProperty AttachModelProperty =
            DependencyProperty.RegisterAttached("AttachModel", typeof(string), typeof(ModelAttacher), new PropertyMetadata(null, AttachModelCallback));
        
        private static void AttachModelCallback(DependencyObject source, DependencyPropertyChangedEventArgs args)
        {
            var typeName = args.NewValue as string;
            var view = source as FrameworkElement;
            if (typeName == null || view == null)
                return;

            var modelType = Type.GetType("Raven.Studio.Models." + typeName) ?? Type.GetType(typeName);
            if (modelType == null)
                return;

            try
            {
                var model = Activator.CreateInstance(modelType);
                view.DataContext = model;
            }
            catch (Exception ex)
            {
                throw new InvalidOperationException(string.Format("Cannot create instance of model type: {0}", modelType), ex);
            }
        }

        public static string GetAttachModel(UIElement element)
        {
            return (string)element.GetValue(AttachModelProperty);
        }

        public static void SetAttachModel(UIElement element, string value)
        {
            element.SetValue(AttachModelProperty, value);
        }
    }
}

Now, in order to attach a model to its view, we just need to add the attached property to the view element: Infrastructure:ModelAttacher.AttachModel="HomeModel".

Please note that in this case any view-model has to have a default (parameterless) constructor. To work with that, what we have done in RavenDB Studio is to have a ModelBase class for our view-models, which makes sure to expose all our common model dependencies.
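
The idea, roughly, looks like the following sketch. This is not the actual Studio class; the DatabaseModel type and the ApplicationModel.Database property are assumptions used for illustration:

public abstract class ModelBase
{
    protected ModelBase()
    {
        // The ModelAttacher can only call a parameterless constructor, so the shared
        // dependencies are pulled from well-known application-level objects here.
        Database = ApplicationModel.Database;
    }

    public Observable<DatabaseModel> Database { get; private set; }
}

public class HomeModel : ModelBase
{
    // Gets Database (and any other common dependencies) from ModelBase for free.
}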

XAML Magic: Turning an Ugly Silverlight Duckling into a Beautiful Photoshopped Swan

This is a guest post by Samuel Jack, who had done a lot of work on the new UI for RavenDB.

Three weeks ago Ayende put out a request for help in turning an ugly duckling into a beautiful swan, and I, rather nervously, signed up for the job. The ugly duckling in question was Raven Studio, the Silverlight-based management UI for Raven Db. The nerves were a result of doubting that my limited graphic design skills were up to the job. But when Ayende assured me that he had a proto-type swan in the form of a Photoshop design, drawn up by a bona-fide, turtle-neck-wearing designer, they were calmed. Marginally.

Because the design Ayende had was for the new-look RavenDb website.

image

He wanted me to take the look and feel and transfer it to the Silverlight Raven Studio application. Which, when he handed it over to me, looked like this:

image

Ahh! Where to start?

Photoshop for Developer Dummies

To ease myself in, I got started by simply trying to imitate parts of the Photoshop design in XAML, beginning with the header bar at the very top of the page. Not being a designer myself, I’m rather like a duck out of water when it comes to Photoshop, but I’ve at least got the basics sussed.

The thing to understand is that designers construct Photoshop images like onions, layer upon layer, sometimes eye-watering in complexity, and to reproduce the design, you have to peel down through the layers.

Photoshop Layer Basics

First go to Photoshop’s Layers pane, and make sure all the layers are unlocked. This allows the Move Layer tool to come into play, not to move layers, but to identify layers by selecting them in the Layers pane when you click the corresponding part of the image. Once you’ve identified a layer, Alt-Click it, and all other layers in the image will be hidden, allowing you to figure out exactly how the thing should look.

Mostly when I’m paring down Photoshop layers I’m looking to isolate them so that I can figure out the colour gradients they use. You could, of course, navigate your way through Photoshop’s dialogs to read off the exact RGB values. Or, if you can get the layer on its lonesome, you can use Expression Blend’s Gradient Dropper tool.

GradientEyeDropper

This is a brilliant little timesaver. In the Blend Property pane, select the Brush property of your choice, put it into Gradient mode, click the Gradient Dropper tool, then drag over any area on screen, and Blend will reproduce the gradient under the cursor in XAML form.

After all that, I have the first feather of the new swan: a header bar matching the Photoshop design. Well, the background of the header bar. It needs fleshing out with some buttons.

Let it Live: Control Templates and Visual States

Silverlight, following WPF, has the concept of look-less controls. That is, the Controls (take Button as an example) manage their behaviour (Buttons respond to mouse clicks by executing commands) but don’t define how they are rendered on screen. That is left to the control’s Style, and specifically its ControlTemplate. The ControlTemplate defines the visual tree of all the UI elements needed to present the control and make it look snazzy. With a little patience, some assistance from Expression Blend, and plenty of application of the Gradient Dropper tool, it’s possible to take the built-in controls and make them look and feel just how the designer ordered.

I wanted Buttons that look like those in the header bar of the Photoshop design, but when the corresponding page is selected, they should change to have a background gradient with colours like the RavenDb logo.

When restyling a Control, it’s best to start by modifying the existing style. This way you can be sure you won’t miss an aspect of the control’s behaviour that you might otherwise forget. Blend makes this easy by giving you the option of editing a copy of the Control’s current ControlTemplate (right-click on it in the Objects and Timeline View, then select Edit Template > Edit a Copy). There are occasions when that little trick has failed, and I’ve ended up with an empty control template. But MSDN has come through for me then: it has a whole section containing the default Styles and Control Templates for all the built-in controls, like this one for Button.

Part of the ControlTemplate defines how the control looks in its various states, when the mouse is over it, when it has focus, or when it is selected, for example. The Control itself is responsible for indicating when it has entered each state. As a designer, it’s your job to specify Storyboards that are activated each time particular states are entered. Each Storyboard can animate whichever properties it likes to achieve the desired effect – in my Buttons, for example, I animate the opacity property of a Border element to fade in a coloured background indicating that it is selected. All this is overseen by the VisualStateManager, of which, more here. Naturally, Expression Blend has great editing support for visual states. Read John Papa’s tutorial to learn more.

So now I have a header bar with buttons that change colour when the corresponding page is selected. Where next?

Textured Backgrounds

Well, that page background could do with spicing up. The Photoshop design has a nice textured background, which I extracted to a PNG file that Silverlight would understand by hiding every layer except the background, then using Photoshop’s “Save for Web & Devices” feature.

The thing about textured backgrounds is that you do want them to cover the whole of the page background, which means tiling the texture to fill all the space. WPF makes this easy with its ImageBrush, which has a TileMode property, which, when set to a value other than None, automatically repeats the image over the whole area to be painted by the brush. Silverlight has ImageBrushes, but they don’t support tiling out of the box. Fortunately, Phil Middlemiss has supplied what is lacking in the form of the TiledBGControl which does exactly what I need – you should take a look: it makes clever use of Silverlight’s pixel shader effects.

This is what we’ve got so far.

image

The Index Editor Page, Before and After

Here are a couple of other pages I’ve beautified. First, the Index Editor page, as it was before:

image

And now:

image

Again, it was a challenge knowing where to apply my beautician’s brush first. I settled on adding the header bar at the top, and I then realised it could double up as a toolbar. Originally the page had no header at all, but by having a bread-crumb bar in the header it helps to give the user a bit more context when they’re looking at the page, as well as making it easier to navigate around.

Inspiration and Icons

Since my graphic-design skills are so underdeveloped, I borrow ideas shamelessly wherever I find something that fits. You may recognise the styling of the toolbar buttons in the Index page header bar as being remarkably similar to the ones on the Google Chrome toolbar. Yes – Expression Blend’s Gradient dropper does work on live applications too! Two places to check out if you find yourself short on inspiration are Quince and Pattern Tab which both catalogue examples of user interface and user experience design from across the web. Pattern Tab especially has myriad examples of beautiful UIs.

In the past I’ve struggled to find icons for my projects, but I’ve recently discovered two great sources: IconFinder.com and IconArchive.com. Both have excellent search facilities (which is often what’s missing from the commercial collections you buy and download to your machine in a whacking great zip file), and are careful to call out the license attached to each icon. A surprisingly large number are licensed so that they can be used without charge in commercial products.

A XAML Tip

The nice thing about styling an application is that it gets easier with every page you complete. Once you’ve settled on a look for a particular kind of element, you can repeat that look on every page. Silverlight’s support for Styles and Resources makes this very easy. And I have a tip that can make it easier still.

I put all my styles into a single resource dictionary, Styles.xaml, which I merge into my App.xaml resource dictionary. I then name all my Styles, Brushes, etc. using a hierarchical naming convention. So Styles all begin with “Style_”, Styles for Buttons all begin with “Style_Button”, and then come the styles for different purposes: “Style_Button_Toolbar”, “Style_Button_Hero” (for those big red buttons in your app that the hero uses to save the world), and so on. The pay-off for using this convention comes when you’re hand-editing XAML and making use of Resharper’s XAML intellisense. Type “{StaticResource Style_[ControlType]” and Resharper instantly presents you with a list of all the styles that might apply.
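
As a rough sketch of that set-up (the brush colour and the setters are placeholders, not the real Raven Studio styles): Styles.xaml holds the named resources, App.xaml merges it in, and any page can then pick up a style by its hierarchical key.

<!-- Styles.xaml: illustrative resources only -->
<ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
    <SolidColorBrush x:Key="Brush_Header_Background" Color="#FF8B0000"/>
    <Style x:Key="Style_Button_Toolbar" TargetType="Button">
        <Setter Property="Margin" Value="2"/>
        <Setter Property="Padding" Value="6,2"/>
    </Style>
</ResourceDictionary>

<!-- App.xaml: merge the dictionary so the styles are available app-wide -->
<Application.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <ResourceDictionary Source="Styles.xaml"/>
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</Application.Resources>

<!-- Anywhere in the app -->
<Button Content="Save" Style="{StaticResource Style_Button_Toolbar}"/>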

A Parting Screenshot

To finish, here’s one more before and after comparison, this time of the Edit Document page. Before:

Edit Document Page - Before

And after:

image

You can begin to sense the benefit of using a consistent set of styles, as it brings a harmonious feel to the whole application.

I hope you’ve enjoyed this whistle-stop tour of the Raven Studio beautification process. Remember that all the code is available on GitHub. We’d love to hear what you think.

Tags:

Published at

Originally posted at
