Tag Archives: Glom


Trying CMake Again

I’ve recently been playing with CMake again, with much more success than last time. I still don’t love it, but:

  • I first tried it in the cmake branch of my little PrefixSuffix application, because its build is rather simple.
  • I then finally allowed libsigc++ to have an additional CMake build system, at least in libsigc++-3.0, and Marcin Kolny got it done. This was possible now that libsigc++ doesn’t generate code from .m4 files. We wouldn’t have wanted to maintain two complicated build systems, but two simple build systems seems acceptable. I noticed many questions on StackExchange about actually using libsigc++ in CMake build systems, so I added a hint about that to the documentation.
  • I also added a CMake build system to my Glom source code, to test something bigger, with many awkward dependencies such as Python, Boost, and PostgreSQL. It’s working well now, though I haven’t yet added all the tests to the build. This let me really try out CLion.

Minor Conclusions

I’m fairly pleased with the result. CMake doesn’t now feel massively more annoying than autotools, though it makes no attempt to do the dist and distcheck that I really need. I have now achieved a level of acceptance that means I wouldn’t mind much working on a project that uses CMake. I wouldn’t even need to hold my nose. The following complaints don’t seem to bother me quite so much any more:

I still find invoking CMake horribly obscure compared to ./configure. For instance:

cmake -DCMAKE_INSTALL_PREFIX:PATH=/opt/something

seems ugly compared to

./configure --prefix=/opt/gnome

and I don’t see an equivalent for “./configure --help” to list all generic and project-specific build options. I also haven’t tried to replace mm-common’s wonderful --enable-warnings=fatal option, which easily turns on compiler warnings as errors, encouraging my code to be free of warnings and deprecated API while not troubling people who just want to build the project with the least difficulty.

I still cannot understand how it’s normal in the CMake world to create these awful little Find*.cmake scripts for each library that you might depend on, instead of just using pkg-config, though CMake’s pkg-config support seems to work well now. And hoping that your own library’s Find*.cmake script ends up in CMake itself does not seem scalable.

It’s also a little odd that I cannot specify the link directories (-L flags) per target. link_directories() seems to apply to all targets.

CMake’s syntax still seems like a toy language. Its quoting (none/implicit/unknown), its separators (spaces, not commas), and its use of whitespace (occasionally significant) sometimes make m4 feel almost friendly. The case-insensitivity of function names (but case-sensitivity of variable names) just leads to arbitrarily inconsistent code and unnecessary style differences between projects.

I don’t like the CMake tradition of having separate CMakeLists.txt files in sub-directories, because this can lead to duplication and makes it harder to see the whole build system at once. I prefer non-recursive autotools, so I tried to use just one CMakeLists.txt for my CMake projects too, which was only a little awkward.

C++ Core Guidelines: Ownership and Parameters

I’ve been catching up on the wonderful CppCon talks from 2014 and 2015. It looks like we are getting towards a consensus about how to write modern C++, though I guess we are still early in the process, and still coming up with as many questions as answers.

When this all settles down, it might need Scott Meyers to come out of retirement from C++ to combine and update his Effective C++ books. Maybe he’d do it if a really modern C++ emerged from it all.

Talks, Guidelines, Smartpointers and Ownership

Some of the most important talks focus on when pointers, or smartpointers, have “ownership”. Three talks in particular progress towards a rather utopian vision in which (shared) object ownership is clearer in various parts of your code and in which there are no object lifetime bugs:

  • CppCon 2014: Herb Sutter “Back to the Basics! Essentials of Modern C++ Style”  (slides)
    Herb discussed some best practices, particularly around auto and smartpointers.
  • CppCon 2015: Bjarne Stroustrup: “Writing Good C++14” (slides)
    Bjarne discussed how we can agree on guidelines that static analysis tools can use, and how following the rules can let static analysis tools find more bugs. For example, by using types to indicate whether a (smart)pointer is owning. As in Herb’s 2014 talk, he suggested that we should get() the raw pointer out of a shared_ptr<> before passing it to a function that doesn’t really want to share ownership. He also introduced the GSL, which has owner<> for when you can’t use shared_ptr<> or unique_ptr<>.
  • CppCon 2015: Herb Sutter: “Better C++14 by Default” (slides)
    The next day, Herb discussed many more ways that static analysis tools could avoid several classes of bugs, and how we can break the rules down into sets called profiles. He suggested some annotations, via the C++ generalized attributes syntax, already in C++11, which could let the tools find even more problems, but showed that we can make good assumptions by default. He showed a Microsoft static analysis tool doing this already.

I like the idea of tools telling me what to do. C++ is more complicated than ever, though also better than ever, and there’s more scope than ever for people to disagree about how to use it. At some point I like to move on and get stuff done. I’m sceptical that we can ever really make our code as safe as Bjarne and Herb suggest, but I’m convinced that we could catch several new categories of problems before runtime, and I want to try.

C++ Core Guidelines, Smartpointers, and Ownership

The C++ Core Guidelines are currently a bit of a muddle. There are many rules that seem to overlap too much, often with insubstantial examples. Too often there are admonitions about what not to do rather than examples of how to do things properly. Trying to comply with one rule will sometimes violate another rule, with no clues about resolving the conflict. The GitHub issues and commits show that conflicts are being clarified, but the whole thing seems so big and spread so thinly that I guess it will be a long time until it feels consistent and cohesive.

The advice often seems to assume that you are writing application code rather than reusable library code, though large real-world systems are rarely just one or the other. Therefore there is little consideration of ABI stability. For instance, it suggests “F.7: For general use, take T* or T& arguments rather than smart pointers” and “I.11: Never transfer ownership by a raw pointer (T*)” (and more overlapping advice), but what happens when you need to change the implementation and need to share ownership of the passed instance? You can’t just change a T* to a std::shared_ptr<T> without making existing applications crash. The document suggests GSL’s owner<> in this case, though I guess it will still affect shared library symbol names for methods, and it won’t actually do any reference counting. So in a library where you know that instances get passed around a lot, and there’s a reasonable chance that ownership will be shared, it doesn’t seem unreasonable to default to std::shared_ptr<> even if that obscures the actual ownership in some parts of the code.

Furthermore, when looking at other people’s code, how do we distinguish the foo(const T*) method that probably doesn’t keep a copy of the pointer, from all the foo(const T*) methods that might do anything at all because their authors never heard of the guidelines?

Raw pointers as non-owners

As a start, I’m playing with this idea of not overusing smartpointers and letting the lack of a smartpointer indicate non-involvement in the ownership. For instance:

std::shared_ptr<Something> something = get_something();

// This takes a shared_ptr:
do_something_and_keep_it(something); 

// This doesn't care about ownership:
do_something_and_dont_keep_it(something.get());

I’m ready to try this out in an application such as Glom, but fearful of doing it in a library API, such as gtkmm.

Bear in mind that, in the past:

  • I have been one of the maintainers that made glibmm and gtkmm use Glib::RefPtr<>, an intrusive reference-counting smartpointer, for all our wrapper classes of reference-counted glib/GTK+ objects (not widgets). For instance, see the doxygen graph of glibmm/giomm classes deriving from Glib::ObjectBase, and there are many in gtkmm too (not the widgets).
  • I converted most of my Glom codebase to pass all of its data-structure classes around via shared_ptr (a custom shared_ptr<> before C++11) rather than trying to decide when they could be deleted.

These have worked out very well, partly because “only use this via this smartpointer” is a nice simple rule to follow. I’ve resisted letting anyone easily get the underlying pointer out of a RefPtr, to preserve this simplicity. Now that there’s this new (at least to me) consensus, I guess we’ll actually add RefPtr::get() and RefPtr::operator*() soon.

But I still hesitate, because I don’t expect the typical developer to be careful enough until there are tools to stop misuse, and I don’t expect the typical developer to use the tools even when they exist. So the price of a better experience for experts may be more ways for non-experts to make mistakes. The gtkmm API has tried to be pure, modern C++ while also being pragmatic and obvious, but it will be hard to keep the balance in this case.

Trying out raw pointers as non-owners in Glom

I tried this out in an “ownership” branch of Glom. I first looked at some static (standalone) functions that took a std::shared_ptr<> because those were unlikely to store any state between calls. Here is the fairly small patch that changed those parameters.

I then looked at some other functions that took a shared pointer to const (std::shared_ptr<const Something>), suggesting that the implementation just wanted to look at it and then forget it. Changing one function’s parameter forces you to look at the calling functions, where you might find that the shared_ptr<> is itself a parameter that should be changed to a non-owning raw pointer. I followed this logic up the call stack for a while and ended up with a larger patch. I forget which function I tried to change first.

I like that this reduces the amount of code where I need to worry about circular references, which I would break with a std::weak_ptr<>. But I still worry enough to wish there was some way to automatically identify where these might happen.

I’m still a little uncomfortable with this mix of shared_ptr<> and raw pointers, even in purely application code, so I’m not ready to put this in git master just yet. But I think I’ll come around to the idea.

std::unique_ptr parameters to take ownership

The C++ Core Guidelines suggest “R.32: Take a unique_ptr<widget> parameter to express that a function assumes ownership of a widget”, though they currently tell you to avoid std::move() (update: fixed now), which you’d need when calling such a method with a named variable. They don’t mean a gtkmm Widget, but I do.

Watch Herb Sutter’s 2014 talk for the rationale for taking std::unique_ptr<>, but not all types, by value. It’s also mentioned in the C++ Core Guidelines “F.15: Prefer simple and conventional ways of passing information” section.

For instance:

void
SomeClass::take_something(std::unique_ptr<Something> something) {
  // ...
}

// ...
auto something =
  std::make_unique<Something>("something", 2, 'a');
something->set_extra_foos(foo1, foo2);
bar.take_something(std::move(something));

// Or without the intermediate variable:
bar.take_something(
  std::make_unique<Something>("something", 2, 'a'));

// Or via a factory function that returns a std::unique_ptr<Something>:
bar.take_something(
  create_something("something", 2, 'a'));

Trying out std::unique_ptr<> parameters to take ownership in gtkmm

gtkmm has Gtk::manage(), which roughly means “let the container widget worry about deleting this child widget later”. For instance, you can new a Gtk::Button and call Gtk::manage() on that button before you add() it to a Gtk::Grid. It works well and it’s very simple. Incidentally, all it really does is enable the default behaviour for GtkWidgets – the only clever stuff about Gtk::manage() is how we change the default for gtkmm.

Passing a std::unique_ptr<> to add() could have much the same meaning, and would have the advantage of using standard C++ API and a known convention. I have a patch in bugzilla that does this and there’s been some discussion on gtkmm-list.

However, the result is sometimes hard to love. You have to move the add() to later in your code, to avoid a use after std::move(). And, for some of our API, you really do need to use the pointer right after you’ve add()ed it to a container. So you need to take a temporary “observing” (non-owning) raw pointer before the add().

Currently, the worst part of the gtkmm API for this, which has never felt quite right, though we can blame GTK+, is this:

auto cell = std::make_unique<Gtk::CellRendererText>();
cell->property_style() = Pango::STYLE_ITALIC;

auto cell_nonowning = cell.get(); // yuck
auto column = std::make_unique<Gtk::TreeViewColumn>("something", std::move(cell));
column->add_attribute(cell_nonowning->property_text(), columns.title);
column->add_attribute(cell_nonowning->property_style_set(), columns.italic);

treeview.append_column(std::move(column));

I guess we’ll add the add(std::unique_ptr<>) overloads to allow this, but not deprecate Gtk::manage() until we feel more comfortable with how they make us rearrange our code.

Playing with xdg-app for PrefixSuffix and Glom

xdg-app lets us package applications and their dependencies together for Linux, so a user can just download the application and run it without either the developer or the user worrying about whether the correct versions of the dependencies are on the system. The various “runtimes”, such as the GNOME runtime, get you most of the way there, so you might not need to package many extra dependencies.

I put a lot of work into developing Glom, but I could never get it in front of enough non-technical users. Although it was eventually packaged for the main Linux distros, such as Ubuntu and Fedora, those packages were almost always broken or horribly outdated. I’d eagerly fix bugs reported by users, only to wait 2 years for the fix to get into a Linux distro package.

At this point, I probably couldn’t find the time to work more on Glom even if these problems went away. However, I really want something like xdg-app to succeed so the least I could do is try it out. It was pleasantly straightforward and worked very well for me. Alexander Larsson was patient and clear whenever I needed help.

xdg-app-builder

I first tried creating an xdg-app package for PrefixSuffix, because it’s a very simple app, but one that still needs dependencies that are not in the regular xdg-app GNOME runtime, such as gtkmm, glibmm and libsigc++.

I used xdg-app-builder, which reads a JSON manifest file that lists your application and its dependencies, telling xdg-app-builder where to get the source tarballs and how to build them. Wisely, it assumes that each dependency can be built with the standard configure/make steps, but it also has support for CMake and lets you add in dummy configure and Makefile files. xdg-app-builder’s documentation is here, though I really wish the built HTML was online so I could link to it instead.

Here is the manifest.json file for PrefixSuffix. I can run xdg-app-builder with that manifest, like so:

xdg-app-builder --require-changes ../prefixsuffix-xdgapp manifest.json

xdg-app-builder then builds each dependency in the order of its appearance in the manifest file, installing the files in a prefix in that prefixsuffix-xdgapp folder.

You also need to specify contexts in the “finish-args” though they aren’t explicitly called contexts in the manifest file. For instance, you can give your app access to the network subsystem or the host filesystem subsystem.

Creating the manifest.json file feels a lot like creating a build.gradle file for Android apps, where we would also list the base SDK version needed, along with each version of each dependency, and what permissions the app needs (though permissions are partly requested at runtime now in Android).

Here is the far larger xdg-app-builder manifest file for Glom, which I worked on after I had PrefixSuffix working. I had to provide many more build options for the dependencies and clean up many more installed files that I didn’t need for Glom. For instance, it builds both PostgreSQL and MySQL, as well as avahi, evince, libgda, gtksourceview, goocanvas, and various *mm C++ wrappers. I could have just installed everything, but that would have made the package much larger, and it doesn’t generally seem safe to install lots of unnecessary binaries and files that I wouldn’t be using. I do wish that JSON allowed comments so I could explain why I’ve used various options.

You can test the app out like so:

$ xdg-app build ../prefixsuffix-xdgapp prefixsuffix
... Use the app ...

Or you can start a shell in the xdg-app environment and then run the app, maybe via a debugger:

$ xdg-app build ../prefixsuffix-xdgapp bash
$ prefixsuffix
... Use the app ...
$ exit

Creating or updating an xdg-app repository

xdg-app can install files from online repositories. You can put your built app into a repository like so:

$ xdg-app build-export --gpg-sign="murrayc@murrayc.com" /repos/prefixsuffix ../prefixsuffix-xdgapp
$ xdg-app repo-update /repos/prefixsuffix

You can then copy that directory to a website, so it is available via http(s). You’ll want to make your GPG public key available too, so that xdg-app can check that the packages were really signed by you.

Installing with xdg-app

I uploaded the resulting xdg-app repository for PrefixSuffix to the website, so you should be able to install it like so:

$ wget https://murraycu.github.io/prefixsuffix/keys/prefixsuffix.gpg
$ xdg-app add-remote --user --gpg-import=prefixsuffix.gpg prefixsuffix https://murraycu.github.io/prefixsuffix/repo/
$ xdg-app install-app --user prefixsuffix io.github.murraycu.PrefixSuffix

I imagine that there will be a user interface for this in the future.

You can then run it like so, though it will also be available via your desktop menus like a regular application.

$ xdg-app run io.github.murraycu.PrefixSuffix

Here are similar instructions for installing my xdg-app Glom package.

I won’t promise to keep these packages updated, but I probably will if there is demand, and I’ll try to keep up to date on developments with xdg-app.

Android Glom Experiments

Over the past few weeks I’ve been diving into Android development using a semi-realistic project to force me to learn properly. I wrote a rough first version of a read-only Glom database UI for Android, called android-glom. So now there’s a version in gtkmm (C++), Qt (C++), GWT (Java) and Android (Java). It’s a good way to really try out a framework and I’ve really enjoyed doing that with no pressure.

Here are some of my thoughts about the experience. I’d welcome feedback about the opinions I’ve formed and about the code I’ve written.

I’ve been doing this while reading O’Reilly’s Programming Android book which is pretty good. I had already read the Busy Coder’s Guide to Android Development a few years ago, which I also liked, but I didn’t realise until recently that it’s been kept up to date as a subscription, because Amazon only has the old edition.

Android Studio

I tried out Android Studio for this project. It’s not quite officially stable, but I already find it preferable to Eclipse, even though its UI is only slightly less cluttered than Eclipse’s. It has never asked me to choose a “Perspective”, which is nice. Like Eclipse, it has refactoring tools so you can move Java code around and rename stuff easily. I miss this now when I use C++.

Databases

Android doesn’t really support JDBC, which makes sure that you don’t even try to access an external database server on a tablet or handheld that probably doesn’t have a reliable connection. Those database servers are not meant to be exposed directly on the internet anyway. But it’s still annoying that you have to use a similar-but-separate Android-specific and SQLite-specific database API (SQLiteDatabase, SQLiteOpenHelper and Cursor) instead, making code less portable.
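For reference, here is a minimal sketch of that Android-specific pattern. The database, table, and column names here are hypothetical, not from the real android-glom code:

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class ExampleDbHelper extends SQLiteOpenHelper {
  private static final String DB_NAME = "example.db"; // hypothetical
  private static final int DB_VERSION = 1;

  public ExampleDbHelper(final Context context) {
    super(context, DB_NAME, null, DB_VERSION);
  }

  @Override
  public void onCreate(final SQLiteDatabase db) {
    // Called when the database file is first created:
    db.execSQL("CREATE TABLE records (_id INTEGER PRIMARY KEY, title TEXT)");
  }

  @Override
  public void onUpgrade(final SQLiteDatabase db, final int oldVersion, final int newVersion) {
    // A real implementation would migrate existing data instead:
    db.execSQL("DROP TABLE IF EXISTS records");
    onCreate(db);
  }
}

// Queries then return an Android-specific Cursor rather than a JDBC ResultSet:
// final SQLiteDatabase db = new ExampleDbHelper(context).getReadableDatabase();
// final Cursor cursor = db.rawQuery("SELECT _id, title FROM records", null);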

Luckily I was still able to reuse much of my SQL-building and data-structure Java code from gwt-glom with just minor changes.

Activities and Fragments

Fragments are a fairly new way to let you rearrange your Android UI in various ways for different-sized screens or different orientations. For instance, a tablet UI might have one Activity that shows a list fragment at the left and a detail fragment at the right, depending on which item is selected in the list. But a handset version of that UI might open a second detail Activity to show that detail fragment, because it doesn’t have a big enough screen to show both at once.

Unfortunately, there are various official examples that use fragments to support both tablet and handset UIs in one app, but they don’t all use the same techniques or API. For instance, the “Building a Flexible UI” documentation instead suggests just replacing the single fragment in the single Activity for the handset UI, and using FragmentTransaction.addToBackStack() so the back button works.

However, the Master/Detail new-activity template code used in Eclipse and Android Studio uses the multiple-Activity idea (only one Activity is ever used in the tablet UI), setting an mTwoPane boolean after detecting which XML layout file has been loaded. This works because you can specify separate layouts (or other resources) for Android to use depending on, for instance, screen size or orientation, and you just need to check which it has decided to use.

So this is the system that I’ve used. I later check that mTwoPane boolean when I want to navigate to another part of the UI: either telling the main Activity to do something with one of its fragments, putting the necessary parameters in a Bundle given to Fragment.setArguments(), or just using startActivity() to start a new Activity with the appropriate parameters in its Intent. It’s slightly annoying that fragments take a Bundle but Activities take an Intent, with both having very similar but separate APIs.
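The resulting navigation code looks roughly like this. This is a simplified sketch of the template’s pattern, with hypothetical class and key names:

// In the list Activity, after detecting which layout resource was loaded:
if (mTwoPane) {
  // Tablet: show the detail fragment next to the list, in this same Activity.
  final Bundle arguments = new Bundle();
  arguments.putString(DetailFragment.ARG_ITEM_ID, itemId); // hypothetical key
  final DetailFragment fragment = new DetailFragment();
  fragment.setArguments(arguments);
  getSupportFragmentManager().beginTransaction()
      .replace(R.id.detail_container, fragment) // hypothetical container id
      .commit();
} else {
  // Handset: start a separate detail Activity instead.
  final Intent intent = new Intent(this, DetailActivity.class);
  intent.putExtra(DetailFragment.ARG_ITEM_ID, itemId);
  startActivity(intent);
}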

Content Providers

Many parts of the Android API, such as ListView, ListFragment, and ListActivity, assume the use of a Cursor to access data, which in turn requires that you use either a SQLiteDatabase or a ContentProvider.

The Android API documentation implicitly pushes you, via deprecation, to use a ContentProvider, rather than just a SQLiteDatabase, to separate your UI and data into separate processes, even when you have no need to share your application’s data with other applications, and even though the high-level documentation says “You don’t need to develop your own provider if you don’t intend to share your data with other applications.”

Specifically, you can tell your ListView, ListFragment, or ListActivity to show your SQLite database data via a CursorAdapter, such as SimpleCursorAdapter, which takes a Cursor. That Cursor can be the result of a SQLiteDatabase.query() or rawQuery(). But you’ll need to call Activity.startManagingCursor() on that Cursor, and that is deprecated (probably because it’s not asynchronous) in favour of using CursorLoader (by implementing LoaderManager.LoaderCallbacks). And that means using a ContentProvider. See the “Running a Query with a CursorLoader” documentation. I wish that the documentation and examples just started off with this clear recommendation.
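Put together, the recommended arrangement looks something like this sketch, inside a ListFragment that implements LoaderManager.LoaderCallbacks<Cursor>. The content URI and column name are hypothetical:

private SimpleCursorAdapter adapter;

@Override
public void onActivityCreated(final Bundle savedInstanceState) {
  super.onActivityCreated(savedInstanceState);
  adapter = new SimpleCursorAdapter(getActivity(),
      android.R.layout.simple_list_item_1, null /* no cursor yet */,
      new String[] {"title"}, new int[] {android.R.id.text1}, 0);
  setListAdapter(adapter);
  getLoaderManager().initLoader(0, null, this); // Calls onCreateLoader().
}

@Override
public Loader<Cursor> onCreateLoader(final int id, final Bundle args) {
  // The CursorLoader queries the ContentProvider asynchronously:
  final Uri uri = Uri.parse("content://org.example.provider/records"); // hypothetical
  return new CursorLoader(getActivity(), uri, null, null, null, null);
}

@Override
public void onLoadFinished(final Loader<Cursor> loader, final Cursor cursor) {
  adapter.swapCursor(cursor); // The ListView updates automatically.
}

@Override
public void onLoaderReset(final Loader<Cursor> loader) {
  adapter.swapCursor(null);
}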

You can instead implement a custom CursorLoader that uses a SQLiteDatabase directly, but you then lose some functionality such as automatic ListView updating via notification when the data changes. And I think I’ve read of other ListView functionality that only works with a ContentProvider but I can’t find that documentation now. I think it was something about searching or auto-completion.

Content Providers are a bit awkward

Unfortunately, a ContentProvider doesn’t provide quite as much loose coupling as you’d hope. The ContentProvider API is very much like a SQL API. Most example code seems to just expose the SQLite database structure directly, sometimes with a simple mapping of column names as a thin separation. Given that this is the most common ContentProvider implementation that gets cargo-culted, it seems that it should be a lot less verbose.

(Update: I noticed that the mapping of external column names to internal column names is useless anyway, because you end up exposing the internal column names when your Content Provider’s query() returns the database query’s cursor as your Content Provider query()’s cursor. Client code will often then need those internal database column names to call Cursor.getColumnIndex() to get the values in the columns. This is only a problem when you don’t specify specific columns, which would then be mapped by SQLiteQueryBuilder.setProjectionMap().)

I also don’t like how this forces so much implementation code into one ContentProvider API, separated only by switch statements in the query(), insert(), update() and delete() implementations. I’d like an easy way to delegate my various ContentProvider URI implementations to separate classes.

Here’s a link to my ContentProvider to show what I mean. It feels like a mess.
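To illustrate, here is roughly the shape I would prefer: one small handler class per content URI, chosen via the usual UriMatcher, so the switch statements collapse to a lookup. This is only a sketch; the TableHandler interface and the authority are made up:

import android.content.UriMatcher;
import android.database.Cursor;
import android.database.sqlite.SQLiteDatabase;
import android.net.Uri;

// A hypothetical interface, with one implementation per table/URI:
interface TableHandler {
  Cursor query(SQLiteDatabase db, Uri uri, String[] projection,
      String selection, String[] selectionArgs, String sortOrder);
}

// Inside the ContentProvider subclass, where helper is its SQLiteOpenHelper:
private static final UriMatcher matcher = new UriMatcher(UriMatcher.NO_MATCH);
private final java.util.Map<Integer, TableHandler> handlers = new java.util.HashMap<>();

static {
  matcher.addURI("org.example.provider", "records", 1); // hypothetical authority
  matcher.addURI("org.example.provider", "records/#", 2);
}

@Override
public Cursor query(final Uri uri, final String[] projection, final String selection,
    final String[] selectionArgs, final String sortOrder) {
  final TableHandler handler = handlers.get(matcher.match(uri));
  if (handler == null) {
    throw new IllegalArgumentException("Unknown URI: " + uri);
  }
  return handler.query(helper.getReadableDatabase(), uri, projection,
      selection, selectionArgs, sortOrder);
}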

A database as a web service

On the other hand, it’s good that the ContentProvider provides a route to storing your data on the internet instead, maybe just caching it locally, without changing your UI code. For instance, using some RESTful service, whose API would closely match a database API.

In fact, I’d like some easy way to just expose a database on the web, with multiple user-level and table-level access control. I know that developers spend huge amounts of time implementing very specific “business logic” web APIs to hide their underlying databases, and I know that developers rightly fear anyone having direct access to their databases. But couldn’t it be done properly? I guess that most of these systems have much the same code with much the same security mistakes, just to put the names of their database tables and columns behind a few levels of programming language class names, method names and parameter names that provide just a thin sense of security and a token hint at modularity.

I’ve found a few systems that do this, but I have no idea of which ones are trusted and used much.


OnlineGlom: MySQL Support

Off and on over the last couple of weeks, I have added MySQL support to OnlineGlom, like I recently added to Glom itself. Now it works and is in git master. When I have the time I’ll try it out with Google’s Cloud SQL (MySQL) in Google’s App Engine.

As with regular Glom, most of the work was getting the self-hosting tests to work with MySQL – to start the MySQL instances, create the databases, fill them with data, test them, and shut them down. The rest of the support was mostly covered already by jOOQ, but I had to make sure it always knew which SQL dialect to use.

OnlineGlom: Slightly Saner Logins

The OnlineGlom demo does not require a login. However, the code does let you set up a server that requires a login, and I noticed that a successful login for one person became a login for everybody else. So after the first login, it was as if no login was required for anybody. Yes, really. Of course, this would not do.

So I fixed that, I think, learning some things about Java Servlet sessions along the way. This text is mostly for my own reference, and so that people can tell me how wrong I am, because I’d like to know about that.

In the server-side code

Java servlets already set a JSESSIONID cookie in the browser, but you shouldn’t try to use that cookie to maintain a login across browser sessions. Instead, I now set and get a custom Cookie, in the server code, using javax.servlet.http.Cookie. HttpSession.getId() conveniently provides a unique-enough session ID for me to use in the Cookie. This page about Cookies with GWT seems to suggest setting the cookie in the client-side JavaScript code, using com.google.gwt.user.client.Cookies, but that sounds rather wrong.
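In code, the server-side part looks roughly like this. This is a sketch; the cookie name is my own invention:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// In the login servlet (an HttpServlet subclass), after successful authentication:
void setLoginCookie(final HttpServletRequest request, final HttpServletResponse response) {
  // Refuse to do anything if HTTPS was not used (see below):
  if (!request.isSecure()) {
    return;
  }

  // HttpSession.getId() is unique enough to identify this login:
  final String sessionId = request.getSession().getId();

  final Cookie cookie = new Cookie("OnlineGlomLogin", sessionId); // hypothetical name
  cookie.setSecure(true); // Only ever send this cookie over HTTPS.
  cookie.setMaxAge(60 * 60 * 24); // For instance, expire after a day.
  response.addCookie(cookie);
}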

I now store the username and password (Yes, that’s not good, so keep reading), associated with the session ID, in a structure that’s associated with the ServletContext, via the javax.servlet.ServletContext.setAttribute() method. I get the ServletContext via the ServletConfig.getServletContext() method. I believe that this single instance is available to the entire web “app”, and it seems to work across my various servlets. For instance, if I login to view a regular page, the images servlet can also then provide images to show in the page. I’d really like to know if this is not the right thing to do.
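The ServletContext part is something like this sketch, where the attribute name and the Credentials class are mine:

import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.ServletContext;

// Credentials is a hypothetical small class holding the username and password.
private static final String SESSIONS_ATTRIBUTE = "online-glom-sessions"; // hypothetical

// Gets (or lazily creates) the map shared by all servlets in this web app,
// keyed by the session ID stored in the cookie:
@SuppressWarnings("unchecked")
ConcurrentHashMap<String, Credentials> getSessions() {
  final ServletContext context = getServletConfig().getServletContext();
  ConcurrentHashMap<String, Credentials> sessions =
      (ConcurrentHashMap<String, Credentials>) context.getAttribute(SESSIONS_ATTRIBUTE);
  if (sessions == null) {
    sessions = new ConcurrentHashMap<>();
    context.setAttribute(SESSIONS_ATTRIBUTE, sessions);
  }
  return sessions;
}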

However, it still stores your PostgreSQL username and password in memory, so it can use it again if you have the cookie from your last successful login. It does not store the password on disk, but that is still not good, because it could presumably still allow someone to steal all the passwords after a break-in, which would then endanger users who use the same password on other websites. I cannot easily avoid this because it’s the PostgreSQL username and password that I’m using for login. PostgreSQL does store a hash rather than the plaintext password, but it still requires the plaintext password to be supplied to it. I think I’ll have to generate PostgreSQL passwords and hide them behind a separate login username/password. Those passwords will still be stored in plaintext, but we won’t be storing the password entered by the user. I’d like to make this generic enough that I can use other authentication systems, such as Google’s for App Engine.

To avoid session hijacking, I made the cookie “secure”, meaning that it may only be provided via a secure protocol, such as HTTPS. (Stopping client-side JavaScript code from reading the cookie is a separate matter: that needs the HttpOnly flag, via Cookie.setHttpOnly().) I made the cookie secure with the javax.servlet.http.Cookie.setSecure() method, though I had to make a build change to make that available.

The login servlet now checks that it has been called via HTTPS, by using the ServletRequest.isSecure() method, and uses HTTPS when testing via mvn gwt:run. It refuses to do any authentication if HTTPS was not used, logging an error on the server.

In the client-side code

I added a check for which protocol is used. If it’s not HTTPS, I warn that login cannot work. This does not add any security, but it’s a helpful hint.

Actually, the entire site must therefore be served via HTTPS, not just the login page, or we would violate the Same Origin Policy by mixing protocols, which the browser would rightfully complain about. At this point I noticed that most serious sites with logins now use HTTPS for their entire site. For instance, Google, Amazon, Facebook. This seems like a good simple rule, though I wonder if many projects don’t enforce it just to make debugging easier.

I also converted the popup login dialog into a proper login page, making sure that it takes the user to the desired page afterwards.

Glom: Experimental MySQL Support

Glom uses PostgreSQL and doesn’t try to offer the user a choice of anything else. That’s because it does what Glom needs, there’s no need to confound the user with an incomprehensible choice, and I’ve no wish to maintain multiple sets of code. It’s hard enough keeping up with changes in PostgreSQL, though Glom’s regression tests help.

However, I played around with adding MySQL support as a build-time alternative via the --enable-mysql configure option. The basic stuff now works, both in the UI and in the regression tests. Those tests can now run each self-hosting database test with all 3 backends.

This is mostly just so I could learn about MySQL, so I can reimplement it in Java for OnlineGlom. That would let me use Google’s Cloud SQL, which is based on MySQL. The main work has been figuring out how to initialize a MySQL database store on disk and then start and stop MySQL instances. It’s even more funky than with PostgreSQL. I did need an addition to libgda to support non-standard MySQL port numbers but, as usual, Vivien Malerba fixed that for me quickly. There’s also the rather huge problem that AppArmor on Ubuntu prevents us from starting MySQL with anything but the standard database data, and we can’t expect the user to go editing AppArmor config files. At least with MySQL 5.6 it should be possible to start a MySQL instance that never briefly has no password at all, as is unavoidable with MySQL 5.5. I need to start and stop custom instances so I can run tests automatically.

I’ve committed it to Glom’s git master, and it’s in the 1.23.3 release, just in case anyone wants to improve it. There are some TODO_MySQL comments in the tests where we expect something to fail with MySQL. For instance, I have not added support for editing the MySQL users and groups. And there are likely to be problems with keeping data when changing field types, which doesn’t seem to be tested thoroughly for any backend. libgda is also missing some support for binary field types, needed for Glom’s image fields.

Over the years, various people have complained about Glom not using MySQL. Here is your chance to actually work on that, with tests to show if your work is enough.

Online Glom is now all Java

Over the last couple of weeks I have reimplemented just enough of the C++ libglom code as Java in Online Glom‘s gwt-glom, removing the need for java-libglom (which wrapped libglom for Java via SWIG). It’s now working and deployed on the Online Glom test server.

This makes both development and deployment much easier. It also made the source code all camelCase so it’s not offensive to the eyes of Java coders.

To replace libglom’s use of GdaSqlBuilder, I used jOOQ. That worked well, thanks to its maintainer, Lukas Eder, who was very helpful and who quickly added some API that I needed.
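For instance, building a simple query with jOOQ looks something like this. A sketch with made-up table and field names, not the real gwt-glom code:

import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

// Build the SQL string without executing it, letting jOOQ worry
// about quoting and the SQL dialect:
final DSLContext context = DSL.using(SQLDialect.POSTGRES);
final String sql = context
    .select(field("name"), field("address"))
    .from(table("contacts")) // hypothetical table
    .where(field("contact_id").eq(1))
    .getSQL();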

Now that the code is all Java I really hope that more people will look over the code and point out anything that can be improved. I still don’t know Java like I know C++ so please don’t be shy about telling me that I have made mistakes.

GWT, Javascript, and serialization

Removing the use of java-libglom let me simplify the code, because I can now send objects, such as LayoutItem, to the client without needing to copy them into separate LayoutItemDTO objects, which existed just because the originals weren’t serializable and so couldn’t be sent from the server (Java) to the client (JavaScript, compiled from Java).

However, this raised some new issues. I wanted some of the objects to contain some extra cached data, so that the client code did not have to calculate it itself, often by retrieving some other related object. Right now these are extra member variables in the classes, but that prevents me from splitting the code off into a new java-glom library.

Furthermore, any class that is sent between the client and server must fully conform to the requirements of the GWT Java-to-JavaScript compiler, even for methods that will only ever run on the server. For instance, I tried to add clone() methods, for use on the server, but that broke the JavaScript compilation because GWT doesn’t have an equivalent for Object.clone().
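So a class that crosses the client/server boundary ends up looking something like this sketch (the real LayoutItem has far more in it):

import java.io.Serializable;

// GWT-RPC requires that transferred classes implement Serializable (or
// GWT's IsSerializable) and have a default constructor, and every method
// must compile to JavaScript, so no Object.clone():
public class LayoutItem implements Serializable {
  private String name = "";
  private String title = "";

  public LayoutItem() {
    // GWT-RPC needs this default constructor.
  }

  public String getName() { return name; }
  public void setName(final String name) { this.name = name; }

  public String getTitle() { return title; }
  public void setTitle(final String title) { this.title = title; }
}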

Those restrictions on the Java code that is allowable on the client side (because it will be compiled to JavaScript) were particularly awkward when the compiler (or Maven, or something) refused to give me clues about what was wrong. For instance, it took me 2 frustrating days to fix this small error by breaking the code apart until only the problem code remained. At other times, there was no real error on stdout, but there were clues (in a variety of hard-to-read formats) in the HTML generated by mvn site. Or sometimes, I could see errors when building inside Eclipse, but not outside.

The GWT system works great, but something is inconsistent about how it shows errors, and it can’t be right that some compilation errors only show up when running, rather than when building.

Next steps

As much as I would like to move on to implementing editing, I need to spend some time now on getting some regression tests set up in the maven build. These must create and run temporary PostgreSQL database instances like I do in the Glom autotools build. For instance, I need to check that the new SQL-building code, using jOOQ, is really safe from SQL injection like the libglom code seems to be.
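For instance, one such test might check that jOOQ renders user input as a bind parameter rather than pasting it into the SQL. A sketch, assuming JUnit and made-up table and field names:

import static org.jooq.impl.DSL.field;
import static org.jooq.impl.DSL.table;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.jooq.DSLContext;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;
import org.junit.Test;

public class SqlInjectionTest {
  @Test
  public void queryUsesBindParameters() {
    final String maliciousInput = "'; DROP TABLE contacts; --";
    final DSLContext context = DSL.using(SQLDialect.POSTGRES);
    final String sql = context
        .select(field("name"))
        .from(table("contacts")) // hypothetical table
        .where(field("name").eq(maliciousInput))
        .getSQL();

    // The input should appear as a ? placeholder, not inline in the SQL:
    assertTrue(sql.contains("?"));
    assertFalse(sql.contains("DROP TABLE"));
  }
}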

My recent changes also caused the OnlineGlomService async API to be particularly inefficient, sometimes sending far more data back to the server than should be necessary. I will try to avoid that, and to make this API smaller, to avoid so many round trips.

Online Glom: Deployment

Today I deployed the latest Online Glom (gwt-glom) on a new Amazon AWS instance of Ubuntu Precise, again connecting Apache with Tomcat. This time I took notes about exactly how I did it. I wonder if this is something that I should put in a Puppet configuration (I have never used Puppet).

It took me a while to figure this out last time, but now it’s clearer to me. However, I still don’t know how to avoid the need for the /OnlineGlom/ suffix in the URL.

Online Glom: Reports

Over the last few weeks, in occasional spare moments, I have added report generation to Online Glom. It kind of works now, using the regular report definitions from the existing .glom files, and generating reports that look much like the reports in the desktop Glom version. I deployed this at onlineglom.openismus.com/OnlineGlom/.

I used JasperReports for this, writing some code to map Glom’s report structure to the JasperReports structure. I chose JasperReports mostly based on its popularity. There are various tools and libraries that use it and the JasperSoft company seems to be very successful by supporting it.

However, I will probably replace JasperReports with some custom code instead, because:

  • The JasperReports API is almost completely undocumented. I figured out what to do only by looking at various snippets of code from random places. The project seems to focus on the use of visual tools to build report designs, rather than the use of a generic API by such tools.
  • JasperReports demands absolute positioning, with no automatic layout, and that will never make sense for HTML.

I will try the DynamicJasper wrapper/helper before abandoning JasperReports, but I don’t see how it can avoid the fundamental problem of absolute positioning.