On the advantages of code completion
The code completion feature of modern integrated development environments (IDEs) is used extensively by developers, up to several times per minute. The reasons for its popularity are manifold.
First, usually only a limited number of actions are applicable in a given context. For instance, given a variable of type java.lang.String, the code completion system proposes only members of this class and none of, say, java.util.List. This way, code completion prevents developers from writing uncompilable code by proposing only those actions that are valid in the given context.
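To make the filtering idea concrete, here is a toy sketch using plain reflection - not Eclipse's actual proposal engine - of how restricting proposals to the receiver's static type prunes the candidate set:

```java
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Toy illustration (not Eclipse's implementation): the proposals for a
// variable are restricted to the members of its static type.
public class TypeFilteredCompletion {
    static List<String> proposalsFor(Class<?> receiverType) {
        return Arrays.stream(receiverType.getMethods())
                .map(Method::getName)
                .distinct()
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> forString = proposalsFor(String.class);
        // String members are proposed...
        System.out.println(forString.contains("substring")); // true
        // ...but List-only members like add(..) are not.
        System.out.println(forString.contains("add"));       // false
    }
}
```

A real engine of course works on the compiler's type bindings rather than reflection, but the pruning principle is the same.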
Second, developers frequently do not know exactly which method to invoke in their current context. Code completion systems like that of Eclipse use pop-up windows to present a list of all possible completions, allowing a developer to browse the proposals and to select the appropriate one from the list. In this case, code completion serves both as a convenient documentation and as an input method for the developer.
Another benefit is that code completion encourages developers to use longer, more descriptive method names, resulting in more readable and understandable code. Typing long names by hand is tedious, but code completion speeds this up by completing the name after the developer has typed only a fraction of it.
On the limitations of code completion
However, current mainstream code completion systems are fairly limited. Often, unnecessary and rarely used methods (including those inherited from superclasses high up in the inheritance hierarchy) are recommended. Current systems are of especially little use when suggestions are needed for big (incoherent) classes with a lot of functionality that can be used in many different ways.
For illustration, consider the code snippet depicted in the listing above. Let's assume that the developer triggered code completion on the swtTextWidget variable in line 3. Take a break and think about what code completion should offer the developer in this context…
Well, let’s see what it actually offers:
164 methods seem to be too much, right? We don't need methods like getMonitor or removeDisposeListener here. But before continuing the discussion (and to make clear that I'm not bashing SWT :-)), let's see what code completion looks like for Swing classes like, say, JButton, after we invoked the constructor on such an instance:
Not much of an improvement, right? But let's get back to the example.
Because of the overwhelming number of proposals, we looked at the source code of several Eclipse plug-ins to see how developers actually use instances of SWT's Text in their code. We observed that developers rarely used more than five methods of Text (in the context of IDialogPage#createControl()) and typically only setText, setLayoutData, addModifyListener, and setFont. Thus, the remaining 160 methods unnecessarily bloated the code completion window, making it hard for developers to see what is actually relevant for their task at hand.
Can we reduce the clutter in code completion?
The example above outlines the basic problem of current code completion systems. Our solution to this problem - a context-sensitive code completion - is shown below. Instead of presenting all 164 potentially callable methods on the text variable, code completion presents only the most likely ones in this context to the user, i.e., setText, setLayoutData, and addModifyListener:
In a (very small) nutshell, the tool works as follows: (i) we grabbed a few Eclipse plug-ins, (ii) looked at how developers actually used Text widgets in their code, (iii) created a database from these usages, (iv) built an intelligent code completion engine that (every time code completion is triggered) looks up the appropriate methods a developer may be interested in, and (v) presents them to the user.
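Steps (iii)-(v) can be sketched roughly as follows. This is a deliberately naive frequency-table recommender, not the actual engine, and the usage counts below are invented for illustration rather than the real mined numbers:

```java
import java.util.*;
import java.util.stream.Collectors;

// Naive sketch: the "database" is a table counting how often each method
// was observed on a type in the mined plug-ins; completion proposes the
// most frequent ones first. Counts are invented for illustration.
public class FrequencyRecommender {
    private final Map<String, Integer> usageCounts = new HashMap<>();

    void record(String method, int count) {
        usageCounts.put(method, count);
    }

    List<String> recommend(int k) {
        return usageCounts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        FrequencyRecommender r = new FrequencyRecommender();
        r.record("setText", 120);
        r.record("setLayoutData", 95);
        r.record("addModifyListener", 60);
        r.record("getMonitor", 1);
        // Rarely used methods like getMonitor drop out of the short list.
        System.out.println(r.recommend(3)); // [setText, setLayoutData, addModifyListener]
    }
}
```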
Of course, Text is not the only class we examined. We created databases for Eclipse SWT, JFace, and some parts of Eclipse UI - and if you are interested in checking out the tool, visit our project page and see a demo screencast showing the Eclipse Code Recommenders tool in action.
Disclaimer ;-)
The intelligent code completion feature of the Eclipse Code Recommenders tool is a research prototype. As such, we appreciate your comments about the idea, implementation issues, and other cool ideas.
So, what's your feeling about having such an intelligent code completion? Do you think having this in Eclipse would be a cool (and helpful) feature? Should we start large-scale training for more Eclipse frameworks?
All the best,
Marcel
This is awesome. I can see myself using this all the time. I'd like to see it as the default for completions, then either pressing Ctrl+Space a second time or beginning to type a method that exists but isn't in the short list could show the full list of methods, reduced to methods that match what the user is typing.
Yes! Definitely cool. Maybe you can even refine this approach by not only evaluating how often methods are used, but also by looking at the context _around_ the code you are writing. This is a little more complicated, as you somehow need to classify dependencies between methods - some method calls may have an order between them while others don't. This may give even more realistic results.
Just to extend a little on this: I think technically what you need is a partial order between the last few (and next few) statements, where you throw away stuff that is somehow not part of the standard API or Eclipse API. Then you define something like a distance metric to prioritize the matches. Not sure how well this works, but you say it is research after all :-)
Indeed this is a very cool and innovative feature both for newcomers and power users.
Is there a possibility of combining the scoring with other aspects like the code context (i.e., on a text widget two setText() calls in a row normally don't make sense) or the scoring of Mylyn?
For framework developers it would be nice to add the generated statistics into the byte or source code as Java annotations to the methods.
Thanks for your feedback!
howlger,
so far we ignore whether a method is called only once or several times. One reason is that calling setText twice does not cause any problems at runtime and thus should not produce a warning.
But you are right. There are several situations where such recommendations could help to detect API misuses. For instance, calling stack.pop() when no stack.push() has been called before will cause an exception at runtime. A smart code recommender could detect this and create warning markers for that problem directly inside your IDE.
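A minimal sketch of such a check, with the pop-before-push rule hard-coded (the real detector would mine rules like this from example code rather than ship them by hand):

```java
import java.util.Arrays;
import java.util.List;

// Toy misuse check: scan the sequence of calls observed on a Stack-typed
// variable and flag a pop() that no push() precedes. The rule is
// hard-coded here purely for illustration.
public class StackMisuseCheck {
    static boolean popBeforePush(List<String> calls) {
        int pushes = 0;
        for (String call : calls) {
            if (call.equals("push")) pushes++;
            if (call.equals("pop") && pushes == 0) return true; // misuse found
            if (call.equals("pop")) pushes--;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(popBeforePush(Arrays.asList("pop")));         // true -> warn
        System.out.println(popBeforePush(Arrays.asList("push", "pop"))); // false -> ok
    }
}
```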
Actually, we are working on a bug detector that discovers "strange" API usages (e.g., "100% of all users called setControl() in DialogPage.createControl() - why didn't you?") and creates warnings like PMD or FindBugs do in Eclipse. The algorithms work and have already found some problems in the current Eclipse codebase (see https://bugs.eclipse.org/bugs/buglist.cgi?query_format=advanced;emailreporter1=1;email1=monperrus;emailtype1=substring)
Currently, we are working on the Eclipse integration of the bug detector and will post a set of screenshots as soon as the tool is ready to use.
Regarding your general question about context: currently, we consider the methods already invoked on the variable and the method context we are in. For instance, the system makes different recommendations for PreferencePage.createContents and PreferencePage.performFinish (i.e., there it recommends text.getText() only). I agree that taking more sophisticated contexts into account could further improve the quality, and it is worth looking at.
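Conceptually, this context sensitivity boils down to keying the usage database by the enclosing framework method. A toy sketch, with invented training data that merely mirrors the example in the text:

```java
import java.util.*;

// Sketch: the same variable type gets different proposals depending on
// the enclosing method the completion is triggered in. Training data
// below is invented for illustration, not the real mined usages.
public class ContextSensitiveRecommender {
    private final Map<String, List<String>> byContext = new HashMap<>();

    void train(String enclosingMethod, List<String> observedCalls) {
        byContext.put(enclosingMethod, observedCalls);
    }

    List<String> recommend(String enclosingMethod) {
        return byContext.getOrDefault(enclosingMethod, List.of());
    }

    public static void main(String[] args) {
        ContextSensitiveRecommender r = new ContextSensitiveRecommender();
        r.train("PreferencePage.createContents",
                List.of("setText", "setLayoutData", "addModifyListener"));
        r.train("PreferencePage.performFinish", List.of("getText"));
        System.out.println(r.recommend("PreferencePage.createContents"));
        System.out.println(r.recommend("PreferencePage.performFinish")); // [getText]
    }
}
```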
Concerning your documentation/generated-statistics idea:
Nice idea - we should think about generating typical usage statistics and some extended Javadocs from them. We could integrate this kind of information into our extended Javadoc view here: http://code-recommenders.blogspot.com/2010/03/problem-of-incomplete-javadocs.html
bnz,
> what you need is a partial order between the
> last few (and next few) statements where you
> throw away stuff that is somehow not part of
> the standard API or Eclipse API.
Right, the order between method calls may carry interesting information. We will conduct some experiments that evaluate how taking the order into account influences the results... Thanks :)
To get some "real results", however, we need people who provide us with real data on how they use (intelligent) code completion. We are working on a small tool that collects information on how developers use Eclipse APIs, to learn where our code completion fails.
Just a general question: could you imagine providing data on how you use Eclipse APIs for research (assuming that it would not require you to take any further actions) to improve code completion?
Cheers,
Marcel
I think the key to data collection is 1) anonymization and 2) transparency. Anonymization by not collecting data that contains person- or project-related information or any kind of API usage that might make it possible to correlate the data and find out which project it is or who the author might be. Transparency by telling the user when you intend to send data and by optionally showing the user _what_ exactly you intend to send.
But honestly, I don't believe that data collection is necessary for this kind of work, and my guess is that you will still have trouble collecting a reasonable amount of data to derive empirically useful information. I would concentrate on mining source code repositories (on the API side and the API-usage side) to extract the data you need, and derive the usefulness of the approach from source code on the API-usage side - e.g., by pretending to remove certain statements from the code and measuring how likely it is that the intelligent code completion actually recommends the removed statement given nothing, given the first few characters, and so on. You can validate the approach this way on a pretty grand scale.
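The holdout validation this comment proposes could look roughly like the following sketch: hide one call from a usage, retrain on the rest, and count how often the hidden call shows up in the top-k proposals. The data and the plain frequency recommender are invented for illustration, not the project's actual evaluation:

```java
import java.util.*;
import java.util.stream.Collectors;

// Leave-one-out sketch of the proposed validation. Usages and the
// frequency-based recommender are toy stand-ins for illustration.
public class HoldoutEvaluation {
    static List<String> topK(Map<String, Integer> counts, int k) {
        return counts.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    // Hide each call once, retrain on the remaining calls, count top-k hits.
    static int[] evaluate(List<List<String>> usages, int k) {
        int hits = 0, trials = 0;
        for (List<String> usage : usages) {
            for (String hidden : usage) {
                Map<String, Integer> counts = new HashMap<>();
                for (List<String> u : usages)
                    for (String call : u)
                        if (!(u == usage && call.equals(hidden)))
                            counts.merge(call, 1, Integer::sum);
                trials++;
                if (topK(counts, k).contains(hidden)) hits++;
            }
        }
        return new int[] { hits, trials };
    }

    public static void main(String[] args) {
        List<List<String>> usages = List.of(
                List.of("setText", "setLayoutData"),
                List.of("setText", "addModifyListener"),
                List.of("setText", "setFont", "getMonitor"));
        int[] r = evaluate(usages, 3);
        System.out.printf("recall@%d = %d/%d%n", 3, r[0], r[1]); // recall@3 = 3/7
    }
}
```

On this toy data only the frequent setText survives being hidden; the singleton calls vanish from the retrained table entirely, which is exactly the effect a larger corpus would dampen.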
Nice work.
This would be especially useful in our work on Groovy-Eclipse. In Groovy-Eclipse, we do some simple type inferencing to statically determine the type of the completion expression (Groovy is a dynamically typed language). But even with this inferencing, the list of possible completions is extremely long. The useful completions are often hidden. We need to do some better analysis, and this could point us in the right direction.
Hi Andrew,
if I had the time... I would love to create such a system for dynamic languages too :) Initially, I had Python in mind, but I can imagine examining how other languages would perform - with some help on the required static analysis for Groovy...? What kind of type inference do you actually implement?
The static analysis in Groovy-Eclipse is really quite simple (it has to be, since it runs on every keystroke). The inferencing engine just walks the AST and remembers assignment statements. Also, there are extension points for third-party plugins to add their own custom inferencing.
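A minimal caricature of that assignment-remembering inferencer - not the actual Groovy-Eclipse code, just the idea of replaying assignments in order and answering type queries from the resulting table:

```java
import java.util.HashMap;
import java.util.Map;

// Caricature of an assignment-based inferencer: visit statements in order,
// remember the type each assignment gives a variable, and answer queries
// from that table. The real engine walks an AST; here we just replay
// (variable, type) pairs.
public class AssignmentTypeInference {
    private final Map<String, String> inferredTypes = new HashMap<>();

    // Simulates visiting an assignment node "variable = <expr of given type>".
    void visitAssignment(String variable, String typeOfRhs) {
        inferredTypes.put(variable, typeOfRhs);
    }

    String typeOf(String variable) {
        // Unknown variables fall back to the most general type.
        return inferredTypes.getOrDefault(variable, "java.lang.Object");
    }

    public static void main(String[] args) {
        AssignmentTypeInference engine = new AssignmentTypeInference();
        engine.visitAssignment("x", "java.lang.String"); // x = "hello"
        engine.visitAssignment("x", "java.util.List");   // x = [1, 2, 3]
        System.out.println(engine.typeOf("x")); // java.util.List (last assignment wins)
        System.out.println(engine.typeOf("y")); // java.lang.Object (fallback)
    }
}
```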
If you're interested, here is a high-level description of how it works:
http://contraptionsforprogramming.blogspot.com/2009/11/how-type-inferencing-for-groovy-in.html
In some ways, inferencing in Groovy is easier than in many other dynamic languages since the use of JDK classes (with a well-known type signature) is prevalent.
Great work!
ReplyDeleteBad idea: I often use code completion to browse what methods are available, as a quicker alternative to javadoc, it would be frustrating to have some of them hidden based on an arbitrary heuristic.
Why not emphasise the frequently used ones, or methods not inherited from superclasses (IDEA puts them in bold at the top of the list), and keep the others available but less distracting (I think IDEA has a limited-size list with a "more..." link at the bottom)?
Hi Rance,
Although not shown, the Eclipse implementation actually allows you to show the most likely, say, 3 recommendations on top of all other proposals and displays all other proposals below (like Mylyn or IDEA does). That's exactly the behavior you expect.
Good job, I am waiting for a final version ;) !!!
Michal, we won't stop until we're done :) However, the prototype is available on the project homepage (if you didn't know it already) and works with SWT, JFace, and some parts of Eclipse UI.
However, we have three people working on a "final" version. But "final" in this case might not be the right word for what we have in mind :-)
Just to outline the solution we are working on: collecting the data we use to train our recommender system is very time-consuming, and example applications (for _all_ the frameworks people use) are really hard to find.
We are currently working on a "collaborative" approach where the knowledge about how we (developers) use an API is tracked inside our IDE and shared with others. With such an approach we are able to continuously learn how to use virtually every framework and create a self-updating, large-scale system for Eclipse - free for everyone to use.
Agreed, this may sound crazy, and not everyone might participate in such a system (though I assume that no one would reject using the results). However, in the times of Web 2.0, where everyone is crowd-sourcing, blogging, and sharing personal data, such data sharing (done right) is a great way to build intelligent code completion systems that continuously improve themselves with every new usage.
If this works - at least similarly well to our current approach - then we are close to "final" :)
Hmm, would Eclipse host such a system/project?
BTW: there are many more ideas in the pipeline:
See http://code.google.com/a/eclipselabs.org/p/code-recommenders/wiki/OngoingProjects
for a (very brief) summary of what's going on.
The next project we will announce is a pretty cool example-code search engine integrated into Eclipse... but more on that when the UI is ready for a first showcase (probably somewhere in July) - we will see... :)
Hi,
I'm glad your Eclipse project proposal was successful. We really do need a "smart" IDE; this will boost productivity awesomely! :)
I did not have time to read through all the comments here... So if I am mentioning anything already discussed, my apologies.
I guess that calculating relevance by examining repositories, collecting usage data, or otherwise is a sound approach. However, I believe this approach is limited by the fact that you can only scan so many repositories / get so many usage reports, and sadly by the fact that not all repos are public.
It would be great if this tooling had means (i.e., extension points) to extend the ruleset for completion relevance. Two things can be achieved this way:
1) a com.myproprietaryapi vendor can contribute a ruleset for the API usages - sort of an interactive documentation;
2) the very Eclipse community can contribute simple yet very useful rules, perhaps as an extension on top of JDT. Example: since I am coding in Java, it is almost always the case that when I say List, I mean java.util.List and not java.awt.List, sun.internal.bla.List, or whatever else is on my classpath. But I always get the irrelevant suggestions first, because of the lexicographical ordering.
I don't want to get too much into the tech details, but perhaps a rule contribution can override another rule contribution. If I am inside UI-dealing code, then perhaps awt.List is a sensible proposal after all.
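The List-ranking rule, including the UI-context override mentioned here, could be sketched like this. All rules and the context flag are invented for illustration; a real contribution would plug into JDT's proposal sorting instead of ranking plain strings:

```java
import java.util.*;

// Toy ranking rule: prefer java.util.List for the simple name "List",
// unless we are in UI-related code, where an override rule prefers
// java.awt.List instead. Everything here is invented for illustration.
public class ListRanking {
    static List<String> rank(List<String> candidates, boolean inUiCode) {
        String preferred = inUiCode ? "java.awt.List" : "java.util.List";
        List<String> ranked = new ArrayList<>(candidates);
        ranked.sort(Comparator
                .comparing((String c) -> !c.equals(preferred)) // preferred first
                .thenComparing(Comparator.naturalOrder()));    // then alphabetical
        return ranked;
    }

    public static void main(String[] args) {
        List<String> candidates = Arrays.asList(
                "java.awt.List", "java.util.List", "sun.internal.bla.List");
        System.out.println(rank(candidates, false)); // [java.util.List, java.awt.List, sun.internal.bla.List]
        System.out.println(rank(candidates, true));  // [java.awt.List, java.util.List, sun.internal.bla.List]
    }
}
```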
...
Just some raw ideas,
Dimitar
In addition to the previous post - what I am basically suggesting is that one form of the "platform for developers sharing knowledge" you are talking about be the extension point mechanism.
Thanks Dimitar.
Just brief answers on the blog for now - maybe we can discuss things in more detail on the Eclipse mailing list/forum whenever it is set up.
Regarding (1): I agree with you that we might not achieve the required coverage by looking at example source code only. Providing some rule sets and a corresponding grammar to specify them might be a candidate solution. We have two ideas we are currently evaluating: (i) an (Xtext-based) language for specifying API usage rules and guidelines, and (ii) something like a 'marketplace' (or app store) where vendors can provide and share their guidelines, recommender models, etc. The main challenge is to come up with an appropriate way to specify such guidelines and to deal with conflicts, as you mentioned... There is currently one student evaluating some ideas in this direction. We will post them as soon as they pass our first checks ;-) Maybe we could work out things in the community.
Regarding (2):
As an initial step in this direction we will come up with a code snippet store that allows developers to specify and share code templates as well as some constraints on when they are applicable. We are making progress on this topic too. With some luck we can give you a sneak preview of one such tool next week. Check the next posts on this blog...