Monday, December 25, 2006
If you are a parent of small children like me, you can probably relate to a situation I commonly find myself in. I take our 4-year-old girl to the public restroom because, well, she's gotta go. And when we get there, we see that the toilet is one of those auto-flushing models, you know, with a light sensor that can tell when an adult has sat down and stood back up and hence flushes automatically. I put the emphasis on adult because, as you may know, this doesn't always work so well with a child, especially a squirmy one. Inevitably, the toilet flushes too soon, and these tend not to be wimpy flushers either. The flushing action can remind one of Charlton Heston as Moses in The Ten Commandments parting the Red Sea. And children find this very disturbing, obviously.
Well, on a recent trip to the "facilities", I came up with a simple hack to get around this problem. Simply take a piece of toilet paper (paper towel would probably work well too) and drape it over the light sensor. The child can then safely use the toilet. After the child has dismounted and everyone is ready for a flushing of biblical proportions, remove the toilet paper from the light sensor, which will activate the flushing, and drop it into the toilet.
Happy parenting!
Thursday, December 21, 2006
Evaluating Grid Portal Security Paper, Review
I just finished reading a paper titled Evaluating Grid Portal Security, by David Del Vecchio, Victor Hazelwood and Marty Humphrey. In it they evaluate GridSphere, OGCE and Clarens against a standard set of security metrics. The conclusion is that there is plenty of "room for improvement". I found their recommendation section at the end particularly helpful. It got me thinking about things we can do in the projects I work on to make grid portals more secure, and here I try to capture my thoughts.
First, I think in the PURSe/PURSe Portlets project, we should provide a way to configure the strength of the password required when a user creates a new registration, and we should provide a secure setting of this by default out of the box. I created a bug report to track this.
Second, I think one of the most difficult challenges for grid portals is in the area of creating, managing and processing auditing logs. The authors do provide a simple criterion: that all grid credential accesses be written to auditing logs. But is this sufficient? It would seem that one would also need to audit all grid service requests (e.g., GRAM and GridFTP calls). Then there is the problem of how to audit the auditing logs themselves. Perhaps there are general-purpose tools to make this more feasible. Nevertheless, we are seeing in TeraGrid a strong requirement for this functionality, so we need to come up with a solution.
For the LEAD Portal that I work on, this is complicated by the fact that we do not have the user's grid credentials at the portal level, nor do we make calls to grid services from the portal. Ours is a more distributed architecture, with services communicating asynchronously via a publish/subscribe notification broker. So what we need is an auditing notification topic that all LEAD services could write to as a kind of auditing log. A special auditing listener could be set up to listen to this topic and persist the messages to a file or database.
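In LEAD the broker is WS-Notification style rather than JMS, but a JMS analogue sketches the idea well enough; the class name, topic wiring, and log path here are all hypothetical:

import java.io.FileWriter;
import java.io.IOException;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Sketch: a listener subscribed to a shared audit topic that appends every
// audit message to a log file (persisting to a database would work equally well).
public class AuditListener implements MessageListener {

    public void onMessage(Message message) {
        try {
            String record = ((TextMessage) message).getText();
            FileWriter out = new FileWriter("/var/log/lead-audit.log", true); // hypothetical path
            try {
                out.write(record + "\n");
            } finally {
                out.close();
            }
        } catch (JMSException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Each service would publish its audit records to the same topic, and the listener would be registered on a topic subscriber via setMessageListener().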
Tuesday, December 05, 2006
Upgrading MySQL 3.23 to 5.0
I recently had to upgrade a crufty old MySQL database on one of our Solaris machines (rainier) from 3.23 to 5.0. Here's the process I came up with. This is all in the MySQL documentation, but you have to hunt here and there for it; what is outlined below lets you do, or at least test, the upgrade while the original server is still running.
Preparation
Downloaded and installed the latest 4.0, 4.1, and 5.0 releases, each into its own directory.
Copied data directory into /usr/local/mysql-data.
Upgrade from 3.23 to 4.0
Started 4.0 pointing at this data directory, on port 3307.
Edited /usr/local/mysql-data/my.cnf to use port 3307 and socket file /tmp/mysql2.sock:
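A minimal sketch of the relevant entries (the rest of the file stays as it was):
[mysqld]
port = 3307
socket = /tmp/mysql2.sock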
cd mysql-4.0
export PATH=$PWD/bin:$PATH
mysqld_safe --defaults-file=/usr/local/mysql-data/my.cnf --user=emysql --datadir=/usr/local/mysql-data --basedir=$PWD --pid-file=/usr/local/mysql-data/rainier.pid
Check the databases:
mysqlcheck --all-databases -u root -p -h rainier -P 3307
mysql_fix_privilege_tables --user=root --socket=/tmp/mysql2.sock --password=xxxxxx
Lots of warnings and errors, but supposedly this is okay.
Didn't need to upgrade ISAM to MyISAM storage engine.
mysqladmin -u root -P 3307 -p -h rainier shutdown
Upgrade from 4.0 to 4.1
cd ../mysql-4.1
export PATH=$PWD/bin:$PATH
mysqld_safe --defaults-file=/usr/local/mysql-data/my.cnf --user=emysql --datadir=/usr/local/mysql-data --basedir=$PWD --pid-file=/usr/local/mysql-data/rainier.pid
Check the databases:
mysqlcheck --all-databases -u root -p -h rainier -P 3307
mysql_fix_privilege_tables --user=root --socket=/tmp/mysql2.sock --password=xxxxxx --basedir=$PWD
mysqladmin -u root -P 3307 -p -h rainier shutdown
Upgrade from 4.1 to 5.0
cd ../mysql-5.0
export PATH=$PWD/bin:$PATH
mysqld_safe --defaults-file=/usr/local/mysql-data/my.cnf --user=emysql --datadir=/usr/local/mysql-data --basedir=$PWD --pid-file=/usr/local/mysql-data/rainier.pid
mysql_upgrade didn't seem to work, but I think that's because I had run it in an earlier attempt to upgrade from 3.23 to 5.0. So I did the individual steps:
mysql_fix_privilege_tables --user=root --socket=/tmp/mysql2.sock --password=xxxxxxxxx --basedir=$PWD
mysqlcheck --check-upgrade --all-databases --auto-repair -u root -p -h rainier -P 3307
I ran mysql_upgrade again anyway, this time with the --force option:
mysql_upgrade -p -S /tmp/mysql2.sock --datadir=/usr/local/mysql-data --basedir=$PWD -u root --force
Additional notes:
Setting up the mysql init script: I set the datadir and the basedir, and then added --defaults-extra-file=$datadir/my.cnf to the line that invokes mysqld_safe.
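Roughly, the modified invocation looks like this (variable names follow the stock mysql.server script and may differ between versions):
$bindir/mysqld_safe --defaults-extra-file=$datadir/my.cnf --datadir=$datadir --pid-file=$pid_file &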
Tuesday, November 28, 2006
Integrated Google AJAX Search with LEAD Portal
First I created a LEAD Project custom search engine:
http://www.google.com/coop/cse?cx=001503951656019931001%3Ag2lqfazq_ti
Then I signed up for a Google AJAX Search key for portal-dev.leadproject.org. Google gave me some code to add, and it worked just fine. I set the web search site restriction to the custom search engine created above:
var searchControl = new GSearchControl();
var searchOptions = new GsearcherOptions();
searchOptions.setExpandMode(GSearchControl.EXPAND_MODE_OPEN); // expand results
var leadWebSearch = new GwebSearch();
// restrict to the LEAD custom search engine created above
leadWebSearch.setSiteRestriction("001503951656019931001:g2lqfazq_ti");
searchControl.addSearcher(leadWebSearch, searchOptions);
// "searchcontrol" is the id of the container div (the name is an assumption)
searchControl.draw(document.getElementById("searchcontrol"));
This all worked fairly well, but I wanted a couple of other things from it. First, I wanted the search control to open the results over the web page instead of within it, rearranging the layout. I found a blog entry at www.cjmillisock.com with a nice use of some simple CSS to get the right effect (a sketch follows). Second, by default the search results are limited to just one. So I created a GsearcherOptions object with expand mode set to OPEN (see the code snippet above and also the API documentation).
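For what it's worth, the CSS trick boils down to absolutely positioning the results container so it floats over the page instead of reflowing it. A rough sketch (the .gsc-results class name is an assumption about the search control's generated markup, so check it against your version):

.gsc-results {
    position: absolute;
    background-color: white;
    border: 1px solid #ccc;
    z-index: 100;
}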
You can see the result at http://portal-dev.leadproject.org
Wednesday, November 01, 2006
Adding the portlet.xml schema to Eclipse
Updated 2007-04-12: Fixed the instructions. Seems the old ones don't work any longer.
I finally figured out how to do this. The inferred XML schema support in Eclipse is pretty nice and usually suffices, but sometimes I want to have completion based upon full schema knowledge. Here's how to add the portlet.xml XSD file to Eclipse:
- First you need to have the JSR 168 code, so go there and get it.
- For Eclipse you'll need WTP installed. Get Eclipse 3.2 and use Callisto Discovery Site to download WTP as well.
- Okay, now in Eclipse's preference window, go to Web and XML > XML Catalog. Click Add....
- In the URI field, click the little arrow and select the portlet-app_1_0.xsd file that you downloaded in the JSR 168 release.
- In the Key Type field, select Schema Location. Then in the Key field, enter the schema location that you will be using in your portlet.xml files. I entered http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd.
- Click OK, OK.
- Then you need to make sure you have the following entries in the root element (portlet-app) in your portlet.xml file:
- [Added 2007-04-12] xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- [Updated 2007-04-12] xsi:schemaLocation="http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd"
- Then if you already have a portlet.xml file loaded you'll need to go to XML > Reload Dependencies. Then you should have tag completion in your portlet.xml files.
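Put together, the root element ends up looking something like this (a minimal sketch; the version attribute is required by the JSR 168 schema):

<portlet-app xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd
                                 http://java.sun.com/xml/ns/portlet/portlet-app_1_0.xsd"
             version="1.0">
    <!-- portlet definitions go here -->
</portlet-app>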
I should note also that this approach works for other XSD files as well. I recently also used these steps to get schema support for Maven2 pom.xml files.
Friday, October 27, 2006
MyFaces JSF and GridSphere
Recently I received an email from someone who was trying to create a JSF portlet for GridSphere and was looking for some advice. I have only encountered one major problem with developing JSF portlets for GridSphere: in GS 2.1.x, MyFaces JSF portlets create invalid default ids for JSF components. The problem is that in this version of GridSphere, RenderResponse.getNamespace() returns an identifier with a "#" character in it, which is an invalid character for JSF component ids. Jason discusses this issue on the gridsphere-dev mailing list.
The simple workaround to this problem is to provide ids for each of your JSF components. Here's a snippet of the kind of JSF template that works with GS 2.1.x:
<!-- FIXME: GridSphere "malformed autogenerated id" issue.
     Several JSF components below have been given unique ids to work around
     this issue in GridSphere 2.1.x. This bug has been fixed in GridSphere
     2.2.x, but we're not currently using 2.2.x. Once we do move to 2.2.x,
     we can remove all ids that are prefixed with "gsid_". -->
<f:view>
  <h:form id="gsid_wrapperForm">
    <h:outputText id="gsid_greetingOT"
        value="Hello, #{Workspace.userFullName}, you are in your Personal Workspace"/>
  </h:form>
</f:view>
...
However, that gets very tedious. Fortunately this problem is fixed in GS 2.2.x, but we're not in a position in the LEAD Project to move to 2.2.x at this time (in fact, I'm holding out for GS 3.0). So I've hacked MyFaces 1.1.4 to change the "#" character to "_hash_" when it gets such a value in the returned namespace. The modified jar is available in the Extreme Lab Maven repository. Also, if you are using Maven 2, you can use:
<dependency>
<groupId>org.apache.myfaces.core</groupId>
<artifactId>myfaces-impl</artifactId>
<version>1.1.4-gs</version>
</dependency>
to specify the dependency, and you'll need to add the extreme repo in your <repositories> list:
<repository>
<name>Extreme Maven2</name>
<id>extreme.repo.maven2</id>
<url>http://www.extreme.indiana.edu/dist/java-repository</url>
<snapshots>
<updatePolicy>daily</updatePolicy>
</snapshots>
</repository>
This has come in very handy now that I've begun working with Facelets, which I am very happy with.
Monday, October 02, 2006
Using Maven's deploy:deploy-file to import 3rd party jars
Maven - Guide to deploying 3rd party JARs to remote repository
Usually I just manually copy third-party jars into our Maven repository when they come my way, but recently I decided to give Maven's own tools a try. In this example, I'm deploying a MyLEAD jar that Scott Jensen sent me:
mvn deploy:deploy-file -DgroupId=mylead -DartifactId=mylead-crosscut-attr \
-Dversion=1.0 -Dpackaging=jar -Dfile=./mylead_crosscut_attr-1.0.jar \
-Durl=scpexe://rainier.extreme.indiana.edu/l/extreme/java/repository
And here, I deploy it again, but this time as a Maven1 type artifact, to support Maven1 clients:
mvn deploy:deploy-file -DgroupId=mylead -DartifactId=mylead-crosscut-attr \
-Dversion=1.0 -Dpackaging=jar -Dfile=./mylead_crosscut_attr-1.0.jar \
-Durl=scpexe://rainier.extreme.indiana.edu/l/extreme/java/repository \
-DrepositoryLayout=legacy
Monday, September 25, 2006
Using service certificates with Globus Java APIs (jglobus)
Anne Wilson of Unidata recently contacted me about using grid credentials for a long-running service. I suggested that she use "service certificates" (definition), but I didn't know exactly how to use them from an application using Java CoG. I looked it up, but it looks like Anne already has a way to do this or something similar, so I just want to make a quick note about how to do it for future reference.
In jglobus there's the GlobusCredential class. To create a GlobusCredential object from a service certificate (or, more generally, from a certificate/private key pair where the private key is not encrypted), the constructor you use is GlobusCredential(String certFile, String unencryptedKeyFile). From there, it's a little unclear but it seems that you can get a GSSCredential from this by then creating a GlobusGSSCredentialImpl object with the arguments (GlobusCredential, int usage) where it seems a good value for usage would be GSSCredential.INITIATE_AND_ACCEPT.
You would use this approach when you need a service with Globus credentials but you don't want to or can't create a proxy certificate for it over and over again.
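Putting the pieces together, a minimal sketch (the class and method names are mine; the constructors are the jglobus ones named above, with error handling elided):

import org.globus.gsi.GlobusCredential;
import org.globus.gsi.gssapi.GlobusGSSCredentialImpl;
import org.ietf.jgss.GSSCredential;

public class ServiceCredentialLoader {

    // certFile/keyFile: a service certificate and its unencrypted private key
    public static GSSCredential loadServiceCredential(String certFile, String keyFile)
            throws Exception {
        GlobusCredential credential = new GlobusCredential(certFile, keyFile);
        // Wrap as a GSSCredential usable for both initiating and accepting connections
        return new GlobusGSSCredentialImpl(credential, GSSCredential.INITIATE_AND_ACCEPT);
    }
}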
Tuesday, September 05, 2006
Getting commons-logging to behave in Tomcat
Commons-logging is the bane of my existence, and I only use log4j now in all of my new projects. Unfortunately, I don't have the luxury of completely avoiding it, since several projects I depend upon use it. Hence, I've found a simple little way to get commons-logging out of my way when working with web applications in Tomcat. The basis of the trick is to get commons-logging to stop trying to auto-discover the log4j configurations I'm using. So I drop a commons-logging.properties file in the top level of the classloader hierarchy ($CATALINA_HOME/common/classes) that directs commons-logging to use its own built-in SimpleLog facility:
org.apache.commons.logging.Log = org.apache.commons.logging.impl.SimpleLog
Then I add a properties file, simplelog.properties, in the same directory to configure the SimpleLog logger:
org.apache.commons.logging.simplelog.defaultlog=warn
org.apache.commons.logging.simplelog.log.org.apache.myfaces=debug
org.apache.commons.logging.simplelog.log.org.globus.purse=debug
Here I've set MyFaces and PURSe, a Grid security library I use, logging levels to DEBUG. Of course, I have to make sure that I have a commons-logging.jar in common/lib. This setup has been working pretty well for me for some time.
Friday, September 01, 2006
Top 10 Coolest Things coming in JSR 286
A while back I posted on the availability of JSR 286 Early Draft 1 (just a reminder that today is the last day to get feedback to the JSR 286 group on this early version of the specification). Recently I finished reviewing it and here is a list of what I found to be the most interesting new things coming in JSR 286.
- Portlet Events - Yes, they are finally here. Portlets can consume and produce events. This is a very important addition, because portals can only handle one action request from one portlet at a time, even though other portlets on the page might want to respond to that action. With JSR 286 they will be able to do so. One thing that is unclear from the specification is the scope of portlet events, i.e., are they broadcast across all portlets on a page, all portlets within that portlet application, or across all of a user's portlets?
- CSS Style Diagrams - Now we can see what the standard CSS portlet styles are supposed to be used for. Okay, it's not a fantastically cool thing, but I'm very happy to see that the JSR 286 group has addressed one of my pet annoyances with JSR 168: that the CSS portlet styles are described only with ambiguous text. That ambiguity makes the standard styles hardly worth the trouble of using, even though they are a very important aspect of portability and reusability for Java portlets.
- Shared Session Attributes - Finally, sessions with portal scope! This seems like the most obvious thing to have in a Java Portlet specification, and I was dismayed to see it missing from JSR 168. One obvious use case for this would be the loading of session objects that other portlets could use for the purpose of "single sign on" in the portal. In the grid portals I've worked with, we load the user's grid proxy credential at login and need a way to make it available to other portlets. We had resorted to ad hoc singleton-style services higher up in the classloader hierarchy in Tomcat. Now we can have a real solution. One interesting point here is that in Early Draft 1 the JSR members specifically ask for input on whether this feature is needed, given that you can accomplish the same sort of thing with Portlet Events. It's my opinion that it should be provided, because sometimes you want user attributes in the user's session that aren't necessarily event oriented.
- Filters - Like their servlet cousins, portlets now have filters. I've yet to think of a need I have for portlet filters, but now that we'll have them, I'm sure some ideas will come to mind.
- Resource Serving - This is a nice feature. This sort of thing is possible in JSR 168 by including servlets with your portlet application that could serve up things like images or JNLP (you know, those WebStart descriptors) documents. But then things were a little bit trickier in that to parameterize those servlet requests you would have to add attributes to the APPLICATION_SCOPE session in your portlet that would later be read by the servlet. Note also that JSR 286 mentions that this would be the way to service AJAX requests.
- Use of Annotations - This Java 5 feature is being used to route Event requests to the appropriate event handling method in processEvent(). Oddly, Early Draft 1 doesn't mention that the same will be done for routing action requests to various action methods. I've done this kind of routing in the VelocityPortlet bridge I wrote, using reflection. The annotation idea looks pretty cool and probably cleaner.
- More Support for AJAX - The focus of Early Draft 1 is "portlet coordination" and "WSRP 2.0 interoperation", so there isn't much that is AJAX specific here, but we are promised that more is on the way (such as state changing resource serving requests).
- Shared Render Parameters - I didn't see this one coming, and I have to admit I'm still trying to figure out how I would best use it. The idea is that when a portlet sets a render parameter, that parameter can be shared with other portlets. For example, when you select an account to view in the "Accounts List" portlet (invoking a portlet action), "Accounts List" could set a currently_selected_account render parameter, which would cause that account to be highlighted in "Accounts List" in the render phase. If this render parameter is shared, then the "Account Detail" portlet on the same page could also see it and, since its doView() but not its processAction() method would be called, update its display with details about the selected account. You can do the same sort of thing with session objects, but this is probably a cleaner way of doing it (a small sketch follows this list).
- Portlets aren't just for Portals anymore - I love this quote:
The predominant applications using portlets today are portals aggregating the portlet markup into portal pages, but the Java Portlet Specification and portlets itself are not restricted to portals.
With JSR 286, I think Java Portlets have the potential to really remake the Java web development scene. Portlets can, and in many cases should, be applied to non-portal environments.
- JAXB! - Okay, I'm stretching this top ten list a bit. JAXB is leveraged in this specification as a way to define payload data for events and shared session attributes. So it looks like I'll need to learn me some JAXB. If you know a good tutorial, let me know.
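As for the shared render parameter sketch promised above: the action side uses the existing JSR 168 API, and exactly how a parameter gets declared as shared was still an open question in Early Draft 1, so treat this as a rough illustration rather than the final API.

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;

// Hypothetical "Accounts List" portlet from the example above
public class AccountsListPortlet extends GenericPortlet {

    public void processAction(ActionRequest request, ActionResponse response)
            throws PortletException {
        String accountId = request.getParameter("accountId");
        // If currently_selected_account is a *shared* render parameter, the
        // "Account Detail" portlet on the same page sees it in its doView()
        // without its own processAction() ever running.
        response.setRenderParameter("currently_selected_account", accountId);
    }
}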
Monday, August 21, 2006
Fixing Out of Memory Errors with Eclipse 3.2, Mac OS X
I've recently been getting out of memory errors with Eclipse 3.2 on Mac OS X. Initially, I bumped the maximum heap size to 512MB, but that wasn't sufficient. I then saw that the problem was actually with the PermGen size. So I bumped that up from the default 64MB to 128MB. So far so good.
On OS X, you change these settings by editing Eclipse.app/Contents/MacOS/eclipse.ini. There I set -Xmx to 512m and I added a parameter, -XX:MaxPermSize, and set it to 128m. My complete eclipse.ini file is here:
-vmargs
-Xdock:icon=../Resources/Eclipse.icns
-XstartOnFirstThread
-Xbootclasspath/p:../../../plugins/org.eclipse.jdt.debug_3.2.0.v20060605/jdi.jar
-Xms40m
-Xmx512m
-XX:MaxPermSize=128m
-Dorg.eclipse.swt.internal.carbon.smallFonts
-Dorg.eclipse.swt.internal.carbon.noFocusRing
Wednesday, August 09, 2006
PURSe Portlets v. 1.0.1 released
PURSe Portlets 1.0.1 Release Notes
Hot off the press, just put this release together. I got a lot of feedback on the 1.0 release, from within the LEAD project and from folks outside the LEAD project. All discovered bugs, more or less, have been fixed in this release. Enjoy!
Tuesday, August 08, 2006
JSR-286 Portlet - Early Draft Review available now
JSR-000286 Portlet - Early Draft Review
It's out. Get it, read it, send feedback. I'll be reading through it myself soon and writing about it here on this blog, so stay tuned.
Wednesday, August 02, 2006
Using Ant with Maven 1 and Maven 2
I've been using Maven 1 for well over a year now. Not long after I had started using it, and generally liking it, the Maven guys decided to drop support for it and came out with 2.0. Everything changed again. This time the directory layout conventions are different (which completely negates the benefits of having adopted a convention in the first place), and it's more difficult now to just use bits of Ant when you need to. I do not look so kindly on such capriciousness, and there are other complaints about Maven besides. Finally, I decided that although, like most developers, I get more excited by build tools than I probably should, in the end it's just a way to build code. Whether it is Ant or Maven, it's really just overhead; at some point, after having constructed beautiful and clean works of code, you need to actually build it and publish an artifact that others can use. And so, in order to lessen this overhead and get some real work done, I decided to move to the more established, better-known Ant.
However, I now have quite a few projects using Maven 1. And eventually, I'll probably have to co-exist with Maven 2 projects. What's more, I think the best part of Maven is the dependency management, and it gets better in Maven 2 with support for transitive dependencies. Is there a way to have my cake and eat it too? Indeed there is. In a moment of brilliance, the Maven guys decided to take their golden dependency management code and make it available as Ant tasks. Turns out this stuff actually works. I've been able to:
- use Maven2 to manage dependencies for my Ant builds, with all of their transitive goodness
- publish Maven2 artifacts to Maven2 repos
- publish Maven1 artifacts to Maven1 repos
- use either Maven1 or Maven2 artifacts
- treat my Maven repository as both a Maven1 and a Maven2 repository
The documentation on the Antlib for Maven 2.0 page is decent. Installing the Ant tasks is fairly straightforward. You add an XML namespace declaration to your project element:
<project name="ExpBuilder" basedir="." artifact="urn:maven-artifact-ant">
...
</project>
Then what I wanted to do was to completely bootstrap the Maven dependency by downloading it with Ant's built-in <get> task and typedef-ing the tasks from the downloaded jar. For this, I add the following to my "init" target:
<mkdir dir="${lib}/maven" />
<get src="${maven.artifact.ant.url}"
    dest="${lib}/maven/maven-artifact-ant-dep.jar" usetimestamp="true" />
<typedef resource="org/apache/maven/artifact/ant/antlib.xml"
uri="urn:maven-artifact-ant">
<classpath>
<pathelement location="${lib}/maven/maven-artifact-ant-dep.jar" />
</classpath>
</typedef>
Where ${maven.artifact.ant.url} is defined in the build.properties file and could be a remote url or a local "file:///" url. Some people instead check this jar into CVS along with the code and use Maven to download the other dependencies, but I wanted something completely bootstrapped.
Defining Maven Repos
Here's what I use, again in my "init" target, to define the needed Maven repos:
<artifact:remoteRepository id="extreme.http.maven1"
    url="http://www.extreme.indiana.edu/dist/java-repository" layout="legacy" />
<artifact:remoteRepository id="extreme.http.maven2"
    url="http://www.extreme.indiana.edu/dist/java-repository" />
<artifact:remoteRepository id="extreme.scp.maven1"
    url="scp://rainier.extreme.indiana.edu/l/extreme/java/repository" layout="legacy" />
<artifact:remoteRepository id="extreme.scpexe.maven1"
    url="scpexe://rainier.extreme.indiana.edu/l/extreme/java/repository" layout="legacy" />
<artifact:remoteRepository id="extreme.scp.maven2"
    url="scp://rainier.extreme.indiana.edu/l/extreme/java/repository" />
<artifact:remoteRepository id="extreme.scpexe.maven2"
    url="scpexe://rainier.extreme.indiana.edu/l/extreme/java/repository" />
Basically, I have one Maven repository that I define six different mappings for. This allows me to treat the same repository as both a Maven 1 and a Maven 2 repository. The difference between the Maven 1 and Maven 2 repository descriptions is the layout="legacy" attribute. I started by using the scp:// style of remote repository deployment, but found that the scpexe:// style actually works better, since it uses my local ssh environment, i.e., my ssh-agent, so I don't have to type in my password several times just to publish a single artifact.
Defining Dependencies
This part is where Maven 2 comes in. See the getting started guide for help getting going with Maven 2 if you're unfamiliar with it. Basically I created a very simple pom.xml file that contains only the basic info like groupId, artifactId, version, and then a list of dependencies. The Maven Ant tasks provide a way to read in and reference your POM:
<artifact:pom id="maven.project" file="pom.xml" />
That's it!
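For reference, the kind of stripped-down pom.xml I mean looks roughly like this (the coordinates and the single dependency are placeholders):

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>ExpBuilder</artifactId>
  <version>1.0</version>
  <dependencies>
    <dependency>
      <groupId>javax.portlet</groupId>
      <artifactId>portlet-api</artifactId>
      <version>1.0</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>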
Pulling in the Dependencies
One of the "gotchas" I discovered while putting this together was that using these tasks to retrieve artifacts from a Maven 1 repo is time consuming. This is because it tries to retrieve the .pom file for all artifacts, even if the jar in question is already in cache. Since my Maven 1 repository has a lot of artifacts in it that were deployed there with Maven and hence do not have .pom files, Maven spends a lot of extra time trying to download these files (on the order of a minute or two for about 20 dependencies). Hence, I decided to create an Ant target, called maven-dependencies, that would get the dependencies and copy them into ./lib directory. That way I only have to run it occasionally. The downside to this is that I have my compile target, for example, not depending on the maven-dependencies target, but it really should depend on it. Here it is:
<artifact:dependencies pathId="maven.compile.classpath"
    filesetId="maven.compile.fileset" useScope="compile">
  <remoteRepository refid="extreme.http.maven1" />
  <remoteRepository refid="extreme.http.maven2" />
  <pom refid="maven.project" />
</artifact:dependencies>
<artifact:dependencies pathId="maven.provided.classpath"
    filesetId="maven.provided.fileset" useScope="provided">
  <remoteRepository refid="extreme.http.maven1" />
  <remoteRepository refid="extreme.http.maven2" />
  <pom refid="maven.project" />
</artifact:dependencies>
<artifact:dependencies pathId="maven.runtime.classpath"
    filesetId="maven.runtime.fileset" useScope="runtime">
  <remoteRepository refid="extreme.http.maven1" />
  <remoteRepository refid="extreme.http.maven2" />
  <pom refid="maven.project" />
</artifact:dependencies>
<copy todir="${lib}">
  <fileset refid="maven.compile.fileset" />
  <mapper type="flatten" />
</copy>
<mkdir dir="${lib}/provided"/>
<copy todir="${lib}/provided">
  <fileset refid="maven.provided.fileset" />
  <mapper type="flatten" />
</copy>
<mkdir dir="${lib}/runtime" />
<copy todir="${lib}/runtime">
  <fileset refid="maven.runtime.fileset" />
  <mapper type="flatten" />
</copy>
Another gotcha is that although I have dependencies with scope "provided" (more on dependency scope in Maven 2), those dependencies don't get translated into a fileSet or path on their own. The "useScope" attribute actually means the set of dependencies needed at that scope. So, for example, my "compile" scoped set of dependencies includes my "provided" scoped dependencies as well as my "compile" scoped ones, since dependencies from both of those scopes are needed to compile my code. Likewise, my "runtime" scoped set includes "compile" and "runtime" dependencies, since those are needed at runtime (and "provided" dependencies are expected to be, well, provided). I use "./lib/*.jar" as my default classpath in the rest of the build script.
Deploying to Maven 1 and Maven 2 repositories
Again, I wanted to use the same repository for both Maven 1 and Maven 2 style artifacts. Deploying is pretty straightforward; here are my targets:
<target name="maven2-deploy" depends="jar">Again, I'm using scpexe as the install-provider because this picks up my ssh-agent environment on Mac and Linux workstations. On Windows you might want the scp install-provider, although it should be possible to configure scpexe install-provider to use certain executables, such as PuTTY.
<artifact:install-provider artifactId="wagon-ssh-external"
version="1.0-alpha-5" />
<artifact:deploy file="${build}/${maven.project.artifactId}-${maven.project.version}.jar">
<remoteRepository refid="extreme.scpexe.maven2" />
<pom refid="maven.project" />
</artifact:deploy>
</target>
<target name="maven1-deploy" depends="jar">
<artifact:install-provider artifactId="wagon-ssh-external"
version="1.0-alpha-5" />
<artifact:deploy file="${build}/${maven.project.artifactId}-${maven.project.version}.jar">
<remoteRepository refid="extreme.scpexe.maven1" />
<pom refid="maven.project" />
</artifact:deploy>
</target>
Finally, let's build a portlet!
Building a portlet in Maven was pretty easy; just issue "maven war". Okay, well, not that easy, because it probably wouldn't pick up the right things, and if you support deployment to multiple portals, then that typically means you need to have multiple web.xml files. So, here's my war-gridsphere target for creating a portlet war for the GridSphere portal server.
<target name="war-gridsphere" depends="compile" description="Creates a web application resource file">Well, that wasn't so bad.
<war destfile="${build}/${maven.project.artifactId}.war"
webxml="${webapp}/WEB-INF/web.xml.gridsphere" compress="true">
<fileset dir="${webapp}" excludes="**/web.xml"/>
<fileset dir="${src.conf}">
<include name="*.properties"/>
</fileset>
<lib dir="${lib}/runtime" includes="*.jar"/>
<classes dir="${build}/classes"/>
</war>
</target>
Monday, July 24, 2006
Portals and Portlets 2006 Conference Recap
Just got back from a fantastic conference in Edinburgh, Scotland called Portals and Portlets 2006. You can access the presentations given on this page. It was a great time to get together with fellow developers working on portals and portlets and related issues in the Grid community. Of course, Edinburgh is one of the most beautiful places I've been to, and I stayed at the Radisson SAS right along the Royal Mile.
Here are some things that I've been thinking about since the conference, inspired by presentations and discussions I had there:
- More than a few times people expressed interest in "portlets without portals". For example, Jason Novotny demonstrated some code he has been working on that would allow bringing a portlet into any web page with just a bit of JavaScript, using AJAX. Previously, Jason created the capability to bring a portlet into a JSP page with a custom JSP tag. I've been thinking along these lines as well. It seems the evolution of modern portals, from information aggregators (something akin to a personalized internet newspaper) to application aggregators to standardized containers, has given us this cool ability to take a web application, treat it like a component, and stick it into a web page. That is, portlets are the cool thing about portals, not the other way around. Portals are now the baggage that portlets have inherited. We now have several portals on the market that do pretty much the same thing, and we inherit their propensity for overbearing layout frameworks that like to cut everything up into rectangles and add lots of icons to them, like "maximize" and "minimize", which I'm pretty sure no user has ever seriously used.
The idea I've been toying around with is to take Pluto and wrap it with a custom JSF component, which would be similar to what Jason did with his JSP tag, except I wouldn't need a portal engine, just the portlet container. JSF components are stateful, so this component should be able to handle anything that Pluto would normally need a full portal engine to take care of. Couple this with Facelets, and you would have a (nearly) pure XHTML way to create a website, sprinkling in portlets where you see fit.
- There was a lot of talk about WSRP. I was unable to find out why anyone would want to use WSRP, as it seems to be a solution in search of a problem, but that is just my personal opinion. The consensus of the group seemed to be that WSRP is a good thing to have and that there is currently no decent community (i.e., open source) implementation of WSRP, since the WSRP4J project is still in incubation and has several bugs besides. Tim Rault-Smith of Sun was there and said that they will be open sourcing the Sun Portal product, including its WSRP stack, so hopefully the community will soon have a decent implementation.
- There were several talks about security, which is a difficult problem in Grid environments. Kurt Mueller talked about GAMA 2, which will be available in the next month or so. I talked about my work with the PURSe portlets. Jipu Jiang talked about using Shibboleth with GridSphere. I had a discussion with Tim Rault-Smith about how we are doing authentication and authorization in the LEAD Portal. Our authorization scheme in the LEAD Portal is based on SAML, using these things called capability tokens. What I dislike about them is that they are time-limited just like Globus proxy certificates, and so we run into the same set of issues we experience with proxy certificates; in other words, they are always expiring at the worst moment. What would be nice is either a more lightweight authorization scheme or perhaps if it were possible to grant a capability to a user limited not in time but in "space", that is, limited to only a particular instance of a service, or limited to a single invocation of the service (which of course implies that the service is somehow stateful).
- Chuck was there, talking about all things Sakai, and it was good to see him again. He wants to make Sakai a JSR 168 portlet container, and I think that's a great idea. I would love to use Sakai this way; I've never been keen on running Sakai as a separate portal and linking into the LEAD portal via iframes.
Thursday, July 06, 2006
Hello World - blogosphere style
I'm doing all things Google today. Created a Gmail account, started using the Google RSS reader, played with Google Earth, and now a blog. Oh yeah, and then there's this Firefox extension that will sync up your Firefox settings on multiple computers, called Google Browser Sync. I'll install it on my work computer tomorrow and see how well it works.