[MonoDevelop] Code analysis soc project
Michael Hutchinson
m.j.hutchinson at gmail.com
Wed Mar 24 00:28:47 EDT 2010
On Tue, Mar 23, 2010 at 6:50 AM, nikhil sarda <diff.operator at gmail.com> wrote:
> Thanks for the reply.
>
> On Tue, Mar 23, 2010 at 11:48 AM, Mike Krüger <mkrueger at novell.com> wrote:
>> Hi
>>> Apparently the current
>>> version of NRefactory is not good enough to detect certain types of
>>> semantic errors. Hence detecting errors such as invalid return types,
>>> incorrect parameters and so on is very difficult to do with the
>>> current NRefactory.
>>
>> These two are possible :) (but anything that needs more analysis
>> than a resolve + type check is painful to implement)
>>
>>> This is because there is presently no way to go to the parent of a
>>> node, and top-down analysis is difficult because "<mkrueger>: because
>>> you can't go to the children of the nodes without knowing the type of
>>> the node". Some things that can be implemented, however, are naming
>>> conventions, spelling in comments and string literals, and so on.
>>> Given this backdrop, is there any point in proceeding with this
>>> project? Or should one wait for the new DOM to be completed?
>>
>> The new DOM is currently almost ready - if you want to work with that,
>> it's possible. You just need a customized mono compiler source code &
>> to comment out the parser file.
>
> Has it been committed to trunk yet? Also, a small introduction to the
> API would be very handy :)
That's great to hear!
I'm a huge fan of the idea of on-the-fly analysis.
As I see it, the core part is a framework for background analysers -
an extension point, a service to run the analysers on new
ParsedDocuments and report the errors and warnings, and some basic
configuration UI. This is the initial "barrier" that needs to be
implemented before anyone can write any analysis rules. This could
take quite a few weeks to fully implement and polish. It would also
be nice to have a way for rules to attach "fixes" to their results,
and this would need a UI too. All of this is completely language
agnostic. For example, I might want to write analysers for ASP.NET
documents or XML, not just C#.
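Just to make the shape of that framework concrete, here's a very rough
sketch of what the rule-facing side could look like. All of these names
(IAnalysisRule, AnalysisResult, AnalysisSeverity, ParsedDocumentInfo) are
made up for illustration, not the actual MonoDevelop API:

// Sketch only: these names are illustrative, not real MonoDevelop types.
using System;
using System.Collections.Generic;

public enum AnalysisSeverity { Error, Warning, Suggestion }

public class AnalysisResult
{
	public int Line, Column;
	public AnalysisSeverity Severity;
	public string Message;
	// Optional quick fix the UI could offer, e.g. a rename refactoring.
	public Action Fix;
}

// Rules would be contributed through an extension point and run by a
// background service whenever a new parsed document becomes available.
public interface IAnalysisRule
{
	IEnumerable<AnalysisResult> Analyze (ParsedDocumentInfo doc);
}

// Minimal stand-in for whatever the parser service really produces.
public class ParsedDocumentInfo
{
	public string FileName;
	public string Text;
}

The background service would simply enumerate the rules registered at the
extension point, run them against the latest parsed document, and feed
the results into the underlining and task UI.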
Beyond that, there are the analysis rules themselves, which scan the parsed documents
and report errors, warnings and suggestions. These rules can range
from trivial to very complex, and some are much more useful than
others.
My favourite example of a rule is an API naming conventions checker.
This would check that any public symbols defined in the file follow
.NET naming conventions, e.g. interfaces begin with I, class and
member names are PascalCase, parameters are camelCase, and all
identifiers are made of CorrectlySpelledWords. This rule would work
for any parsed .NET DOM, not just C#, and would be great for teaching
the conventions to newcomers, but also for catching typos from seasoned
users. It could easily offer quick fixes too, as these would just be
rename refactorings. In fact, quite a few of the framework design
guideline rules could be checked in a language-agnostic way with the
current DOM, since they affect only public API.
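To show how cheap the individual checks are, here's a self-contained
sketch of the kind of string tests such a rule might run on each public
symbol name pulled out of the DOM (the class and method names here are
invented):

// Sketch only: plain string checks a naming-conventions rule might run
// over the symbol names found in a parsed DOM.
using System.Collections.Generic;

static class NamingChecks
{
	// Interfaces should start with 'I' followed by another upper-case letter.
	public static bool LooksLikeInterfaceName (string name)
	{
		return name.Length >= 2 && name[0] == 'I' && char.IsUpper (name[1]);
	}

	// PascalCase: starts upper-case, no underscores.
	public static bool IsPascalCase (string name)
	{
		return name.Length > 0 && char.IsUpper (name[0]) && !name.Contains ("_");
	}

	// camelCase: starts lower-case, no underscores.
	public static bool IsCamelCase (string name)
	{
		return name.Length > 0 && char.IsLower (name[0]) && !name.Contains ("_");
	}

	// Split an identifier into its words so each one can be spell-checked,
	// e.g. "CorrectlySpelledWords" -> "Correctly", "Spelled", "Words".
	public static string[] SplitWords (string name)
	{
		if (string.IsNullOrEmpty (name))
			return new string[0];
		var words = new List<string> ();
		int start = 0;
		for (int i = 1; i < name.Length; i++) {
			if (char.IsUpper (name[i])) {
				words.Add (name.Substring (start, i - start));
				start = i;
			}
		}
		words.Add (name.Substring (start));
		return words.ToArray ();
	}
}

The real work would be walking the DOM for the public types, members and
parameters, and wiring up a rename refactoring as the quick fix for each
violation.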
Another idea is to show unresolved types and unused usings. This would
be C#-only and would require attempting to resolve all types; it could
offer the existing "resolve" command as a quick fix. Other easy yet
useful rules would be to warn about recursion in properties, error on
trivial (i.e. clearly infinite) property recursion, warn about catch
blocks that completely discard the exception, check that string.Format
format strings are correct, warn about string concatenation in a loop...
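The core of the string.Format check, for instance, is only a few lines
once the format string and the argument count have been extracted from
the resolved call. Again, just a sketch with made-up names:

// Sketch only: verify that every "{n}" placeholder in a format string has
// a matching argument. A real rule would pull the format string and the
// argument count out of the resolved string.Format call.
using System.Text.RegularExpressions;

static class FormatStringCheck
{
	public static bool PlaceholdersAreValid (string format, int argumentCount)
	{
		// Drop escaped braces ("{{" and "}}"), then match "{index[,align][:fmt]}".
		string stripped = format.Replace ("{{", "").Replace ("}}", "");
		foreach (Match m in Regex.Matches (stripped, @"\{(\d+)[^}]*\}")) {
			int index = int.Parse (m.Groups[1].Value);
			if (index >= argumentCount)
				return false; // this would throw a FormatException at runtime
		}
		return true;
	}
}

// e.g. PlaceholdersAreValid ("{0} of {1}", 2) == true
//      PlaceholdersAreValid ("{0} of {2}", 2) == false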
Of course there are *many* more complex rules that could be
implemented, but are beyond the scope of a single GSoC project. The
main thing would be to get the framework implemented and polished,
with some fairly simple yet very useful rules to prove it, which would
make it possible for other people to contribute individual rules
afterwards.
(I didn't mention spellchecking strings and comments because I think
that *might* be better for the text editor to handle based on syntax
highlighting, as it would work for many more filetypes. But
specialized rules could do a better job because they'd have more
context.)
- Michael
--
Michael Hutchinson
http://mjhutchinson.com