Getting actual Android code running on your computer

So far in the life of MinimalBible the whole testing situation has been... eherm... convoluted. I've previously outlined why I replatformed the application, how managing the Application lifecycle makes testing difficult, and finally how I've gotten non-emulator testing set up to report code coverage.

One of the things I've wanted to do for a while, but which proved difficult even after many long nights with StackOverflow and other sites, was getting the Robolectric project working with my code. Robolectric is basically a re-implementation of the Android SDK designed for easy testing: instead of having to run Android tests on an actual device or emulator, you can run them directly on your development machine.

Quick caveat: Don't write UI tests using Robolectric. Robolectric is great for running small tests that have to use Android APIs, but it should not be used to validate that your UI code is working. Without an emulator or real device, you have no way to verify that the UI actually behaves the way you intended.

And with that out of the way, let me get into what it took to get Robolectric up and running for MinimalBible. I imagine other people will run into similar issues, so it's worth talking through. All the code this post is based on can be found at commit 71fb362ffe.

Assuming you have the project set up as discussed in my previous post (which was inspired by this), the build.gradle file change is pretty simple. Just add the following dependency:

dependencies {
     testCompile 'org.robolectric:robolectric:2.+'
}

We need to add a special file called project.properties to the <app_name>/src/main folder with this content:

# suppress inspection "UnusedProperty" for whole file
android.library.reference.1=../../build/intermediates/exploded-aar/com.android.support/appcompat-v7/21.0.3
android.library.reference.2=../../build/intermediates/exploded-aar/com.android.support/support-v4/21.0.3

The "suppress" line at the top is so that Android Studio doesn't warn you that the properties are unused.

The Activity we're actually going to be testing is pretty simple too (code here):

public class BasicSearch extends BaseActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_search_results);

        handleSearch(getIntent());
    }
}

So now that the Activity is set up, we need to build a simple test case. Let's do something like this (code here):

@RunWith(RobolectricTestRunner.class)
@Config(emulateSdk = 18, manifest = "../app/src/main/AndroidManifest.xml")
public class BasicSearchTest {
    @Test
    public void testBuildActivity() {
        BasicSearch activity = Robolectric.buildActivity(BasicSearch.class)
            .create().get();
        assertNotNull(activity);
    }
}

This test is just responsible for getting the Activity started, and will fail if there are any issues during startup. Please note that this also includes the Application getting started as well, so it may throw off your coverage metrics.

At any rate, we're now done with the hardest part. With any luck, the test will run, and you'll get a nice green bar telling you that the test was successful.

There's one quick caveat: when you run tests with Gradle, all tests run in the application test project folder. In my case, that's the app-test folder. This is important because Android Studio, by default, runs tests in the project root directory instead. So if the tests work with Android Studio but not with Gradle (or vice versa), that's likely the issue.
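If you want to see which directory your tests are actually running from, a quick diagnostic helps. This is just a sketch; `WorkingDirCheck` is a name I made up, and you'd normally drop the one-liner into an existing test rather than a standalone class:

```java
// Hypothetical diagnostic: print the working directory the JVM started in.
// Run it once from Gradle and once from Android Studio and compare the output.
public class WorkingDirCheck {
    public static String workingDir() {
        // "user.dir" is the JVM's current working directory
        return System.getProperty("user.dir");
    }

    public static void main(String[] args) {
        System.out.println(workingDir());
    }
}
```

If the two runs print different paths, any relative path in your `@Config` annotation (like the manifest location above) will resolve differently between the two environments.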

But if it was actually that easy, it wouldn't be worth yet another post.

Robolectric + AppCompat - The deadly duo

As of right now, Robolectric has very limited support for Android Lollipop (SDK 21), and most specifically the appcompat libraries it ships with. Unfortunately it appears that the ActionBarDrawerToggle is causing some rather hairy problems that aren't easy to figure out.

So if you're trying to get Robolectric working on a Lollipop app, here are my tips from the school of hard knocks:

  1. You can only test Activities that do not use the ActionBarDrawerToggle. Thankfully, this means that Fragment testing will work just fine if you set the Fragments up outside their originally intended Activity.

  2. Make sure that all tests use @Config(emulateSdk = 18) until this issue is closed.

  3. Make sure that Robolectric picks up the support libraries when testing. If they are detected correctly, the test output will include the two lines below:

DEBUG: Loading resources for android.support.v7.appcompat from ./../app/src/main/../../build/intermediates/exploded-aar/com.android.support/appcompat-v7/21.0.3/res...
DEBUG: Loading resources for android.support.v4 from ./../app/src/main/../../build/intermediates/exploded-aar/com.android.support/support-v4/21.0.3/res...

And at this point if you've followed all the tips, you should be good to go!

Summary

So now that we've gone over how to get Robolectric working, and some of the tricks you need to get it working in Lollipop, go forth and write tests. Who knows, it might bump your coverage metrics up 11%.

Comment below if you have any questions, I'll do what I can to help!

Empirical Development

For a while now, I've been wanting to get code coverage working with MinimalBible, and it's finally at a point that I'm mostly satisfied with. I certainly wouldn't argue that it's in a "good" state, but it's enough for now. So, I wanted to write a post outlining how this was all set up since I haven't been able to find many people using this style. Plus, it took a very long time to set up, so I hope I can save someone else some pain in the future!

Before I get too much farther though, let me explain the concept of code coverage. Code coverage is intended to answer the question "What code have I written tests for?" This allows you to quickly spot code that is untested by your existing suite, and lets you know where best to focus your time. Coverage statistics are generated line-by-line and branch-by-branch: if your test doesn't execute a specific branch of code, your coverage system will let you know that not everything is "covered" by a test. All said, you get an easy way to see what potential problems exist and proactively solve them.
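As a quick illustration of branch coverage (a made-up example, not MinimalBible code): a test suite that only calls `classify(5)` executes one branch, and a coverage tool like Jacoco would report the `else` branch as uncovered until some test exercises it.

```java
public class CoverageDemo {
    // Two branches: a suite that only passes non-negative values
    // executes the first branch, and a coverage report flags the
    // second branch as untested.
    public static String classify(int n) {
        if (n >= 0) {
            return "non-negative";
        } else {
            return "negative";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(5));   // covers the first branch
        System.out.println(classify(-5));  // covers the second branch
    }
}
```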

Test Setup

So, onward to how I set up testing in MinimalBible. First things first: I really changed how I do testing in Android. Instead of using the existing Android build tools, I'm using pure-Java testing. Nothing Android. You can find the setup process here (massive shoutouts to Blundell Apps, this has been incredibly useful), but I'll give a quick overview of how it works.

The basic principle is that the testing support in Android is so broken as to be useless. I won't go into all my gripes, but suffice to say I have had issues with the JUnit API (Android uses something below 3.8), code coverage, and UI tests with Espresso. So instead of trying to use the native Android tools for testing, we create a new project based on the source code of our existing project. This allows us to run everything in a native Java environment, without worrying about any of the platform considerations. If you're interested in what that setup would look like, you can find my version here.

There are two projects: app and app-test. app is where the actual source code resides, and app-test is where the test code resides. The test project includes the original code as a dependency so that we can still write tests against it. I've included the important bits of app-test/build.gradle below:

apply plugin: 'java'

def androidModule = project(':app')
dependencies {
    compile androidModule
    testCompile androidModule.android.applicationVariants.toList().first().javaCompile.classpath // 1
    testCompile androidModule.android.applicationVariants.toList().first().javaCompile.outputs.files // 2
    testCompile files(androidModule.plugins.findPlugin("com.android.application").getBootClasspath()) // 3
    testCompile 'junit:junit:4.+' // 4
    testCompile 'org.robolectric:robolectric:2.2'
}

And a quick breakdown of what's going on:

  1. Make sure to include all dependencies of the actual project in the test project
  2. Include all code for the actual project in the test project.
  3. Include all dependencies of the android Gradle plugin in the test project. Not sure why this is here.
  4. Finally, include all the other libraries and things we actually need for testing.

So at this point we have a test-ready project setup. We're not quite ready for code coverage yet, but we're about to change that.

Next steps: Enabling Code Coverage

So, we have a test project set up to run the tests we put inside it. The next step is setting up Jacoco to report on what code is being tested during the test suite. I'm going to present the solution first to make it easy on anyone reading this - if you want a full explanation of what's going on check it out here.

In order to enable Jacoco testing we need to change app-test/build.gradle to look like this:

apply plugin: 'java'
apply plugin: 'jacoco'

def androidModule = project(':app')
def firstVariant = androidModule.android.applicationVariants.toList().first()

def testIncludes = [
    '**/*Test.class'
]
def jacocoExcludes = [
    'android/**',
    'org/bspeice/minimalbible/R*',
    '**/*$$*'
]

First things first, this is the easy part. We're defining what files we want to include in testing, alongside the files we want to exclude from the final report. For example, we exclude the "R" file, since it's all generated code. In addition, anything containing a "$$" is generated by Dagger/Butterknife, so we ignore those too.

If you want to adapt the solution I'm outlining here to your own project, these should be the only sections you need to edit.

The next section is a whole lot more complicated:

dependencies {
    compile androidModule
    testCompile 'junit:junit:4.+'
    testCompile 'org.robolectric:robolectric:+'
    testCompile 'org.mockito:mockito-core:+'
    testCompile 'com.jayway.awaitility:awaitility:+'
    testCompile 'org.jetbrains.spek:spek:+'
    testCompile firstVariant.javaCompile.classpath
    testCompile firstVariant.javaCompile.outputs.files
    testCompile files(androidModule.plugins.findPlugin("com.android.application").getBootClasspath())
}
def buildExcludeTree(path, excludes) {
    fileTree(path).exclude(excludes)
} // 1
jacocoTestReport {
    doFirst {
        // First we build a list of our base directories
        def fileList = new ArrayList<String>()
        def outputsList = firstVariant.javaCompile.outputs.files
        outputsList.each { fileList.add(it.absolutePath.toString()) }
        
        // And build a fileTree from those
        def outputTree = fileList.inject { tree1, tree2 ->
            buildExcludeTree(tree1, jacocoExcludes) +
            buildExcludeTree(tree2, jacocoExcludes)
        }
        
        // And finally tell Jacoco to only include said files in the report
        classDirectories = outputTree
    } // 3
}
tasks.withType(Test) {
    scanForTestClasses = false
    includes = testIncludes
} // 2

  1. Define a quick function that will exclude a list of file paths from a given path.
  2. Set up Gradle to run the tests we defined earlier
  3. Set up Jacoco to exclude the paths we specified earlier from the report. This step is so complicated because we have to get the outputs paths from the Android project, and exclude our paths from each of those.

Wrapping Up

So given the above build.gradle file, we now have a project capable of testing our actual application code and producing coverage statistics on it. And because the testing code is separate from the Android project, you're free to write your tests in JUnit, Spock, or Spek. I'm going to be using Spek moving forward.

We can include tests using the testIncludes list, and make sure that classes don't show up in the report using the jacocoExcludes list. All said, that's what we were after in the first place, so I'll call it a success.

If you want to take this solution further, the next step would be to add Robolectric tests into the suite, but I've been having issues with that too.

Appendix: Jacoco Full Explanation

In order to fully understand how Jacoco excludes things from reporting, we have to step back and visit Gradle first to understand the build lifecycle.

Gradle: Configure, Run

Gradle is an incredibly powerful tool, but it is massively confusing if you don't already know what you're doing. In my opinion, the documentation is still missing many examples that would be super-helpful, and is generally dense to try and get through.

That aside, to understand what's going on, you must understand that the Gradle build process happens in two phases: Configuration, and then Build.

For our purposes, you don't need to understand what each phase does in detail, but understanding the semantics is crucial. Because of the two-phase build, we can't write a build.gradle that tries to exclude files from Jacoco like this:

jacocoTestReport {
    // First we build a list of our base directories
    def fileList = new ArrayList<String>()
    def outputsList = firstVariant.javaCompile.outputs.files
    outputsList.each { fileList.add(it.absolutePath.toString()) }
    
    // And build a fileTree from those
    def outputTree = fileList.inject { tree1, tree2 ->
        buildExcludeTree(tree1, jacocoExcludes) +
        buildExcludeTree(tree2, jacocoExcludes)
    }
    
    // And finally tell Jacoco to only include said files in the report
    classDirectories = outputTree
}

Did you notice the difference? In the second example, we're missing the doFirst closure. Keep this in mind during the next sections.

Under the hood, Jacoco reports on all classes specified in the classDirectories variable. So, all we need to do is make sure that we include all the classes to report on in classDirectories, and exclude the ones we don't want to see.

However, if you skip the doFirst closure, you'll be in deep trouble. Without that closure, Gradle will run the code in the jacocoTestReport block during the configuration phase, before testing is actually run.

What the code actually does is exclude everything in jacocoExcludes from the global class path. This isn't a great solution, but I'm not sure how else to do it.

The problem comes when you exclude files, like the android package, that we don't want to report on but that are needed for testing. When things in android aren't loaded during the tests, you'll get lots of nasty NoClassDefFoundError errors, because Java can't find the code it needs for testing.

The solution? We need to modify the class path only right before Jacoco runs. This way, the tests are allowed to run successfully, and Jacoco never knows about those classes.

To do this, we need to move the class path configuration into the build phase instead of the configuration phase. The way to do that? You guessed it: surround the code in a doFirst closure.
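As a rough analogy in plain Java (this is not Gradle's actual implementation, just a sketch of the idea, with names I invented): configuration code runs immediately, while a doFirst-style action is merely recorded, and only executes later, right before the task body runs.

```java
import java.util.ArrayList;
import java.util.List;

public class TaskAnalogy {
    private final List<Runnable> doFirstActions = new ArrayList<>();
    public final List<String> log = new ArrayList<>();

    // "Configuration phase": the action is only stored, not run
    public void doFirst(Runnable action) {
        doFirstActions.add(action);
    }

    // "Build phase": stored actions finally execute, then the task body
    public void execute(Runnable taskBody) {
        doFirstActions.forEach(Runnable::run);
        taskBody.run();
    }

    public static void main(String[] args) {
        TaskAnalogy report = new TaskAnalogy();
        report.doFirst(() -> report.log.add("narrow classDirectories"));
        report.log.add("configured"); // runs immediately, like config code
        report.execute(() -> report.log.add("generate report"));
        System.out.println(report.log);
    }
}
```

The point of the analogy: code placed directly in the task block behaves like the `log.add("configured")` line and runs at configuration time, while the doFirst action is deferred until execution.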

So the end result is that we can exclude specific classes from reporting without interfering in the test setup process. It took me forever to figure out how exactly to implement this, but I hope this can help someone avoid the same issues in the future.

Side note: Much of the above solution was adapting the procedures outlined here to the world of Android. Thanks to everyone for putting in the effort to make it easier for me!

Disclaimer: The ideas presented below were sparked by a presentation available here. Please, please check this out if you get the opportunity, this has been the single most influential 35 minutes in my programming life so far.

Over the past couple of weeks I've been refactoring some of the MinimalBible codebase to use Kotlin more extensively. It's a fantastic language that I'm using in place of Java. Much of the existing code is being replaced with functional-style code, yielding a clean codebase that integrates well with RxJava.

What kicked off some of this refactoring was the talk referenced above and an online class I've been taking on Scala. Functional Programming is strange at first, but I've been falling in love with it. Thinking recursively is downright weird, but the quote I found that describes functional programming best for me is this:

Functional programming is like describing your problem to a mathematician. Imperative programming is like giving instructions to an idiot.

--- arcus, #scheme on Freenode

So what does functional programming have to do with testing?

Splitting the Core

At the heart of the presentation above is a distinction between what the author deems the core of a program as opposed to its shell. The idea is that a program should be constructed in such a way that its core is purely functional, but its shell can be imperative.

What a program is

At the heart of a program is decision making. You have some form of inputs, whether a database, JSON stream, or hard-coded values. The goal is to produce some meaningful output, whether a database, JSON stream, HTML page, etc. How do you get from one point to the next?

However you do it, decisions need to be made. Maybe you only need to display the records that were created yesterday. Maybe behavior switches in a mobile application depending on whether you are connected to WiFi. All of these decision points create different possible outputs.

So the trick in testing code is to make sure that given the correct inputs, you get the proper outputs. This characteristically involves setting up an in-memory database instead of a real one, simulating a browser connecting to a server, and many different mock objects used to simulate the real things.

This often leads to tests that spend many times longer setting up the environment than actually exercising your code. Instead, what about tests that require no environment setup at all?

Boundary Values

What I'm proposing sounds kind of crazy at first, but you can structure your app(lication) to need no environment setup whatsoever. The idea is that you separate values from how you get them. For example:

I have code in MinimalBible that needs to reload books from a server every 30 days, but only if you're on WiFi. If we split out the values from how they're obtained, you can do something like this:

// Date.getTime() returns milliseconds, so the threshold is in ms
long millisInThirtyDays = 30L * 24 * 60 * 60 * 1000;

// Functional Core
public boolean doRefresh(Date currentDate, Date refreshDate,
        int networkState) {
    return (currentDate.getTime() - refreshDate.getTime())
                > millisInThirtyDays
        && networkState == ConnectivityManager.TYPE_WIFI;
}

// Imperative Shell
public boolean doRefresh() {
    return doRefresh(
        new Date(),
        SharedPreferences.get("lastRefreshDate"),
        ConnectivityManager.getNetworkState());
}

The functional core is the only code we actually need to test. There are four possible branch combinations, and they're all easy to cover:

Date currentDate = new Date();
// refreshed less than 30 days ago
Date recentRefresh = new Date(currentDate.getTime() - millisInThirtyDays + 1);
// refreshed more than 30 days ago
Date oldRefresh = new Date(currentDate.getTime() - millisInThirtyDays - 1);

int wifiState = ConnectivityManager.TYPE_WIFI;
int nonWifiState = ConnectivityManager.TYPE_WIFI + 1;

assertFalse(doRefresh(currentDate, recentRefresh, wifiState));
assertFalse(doRefresh(currentDate, recentRefresh, nonWifiState));
assertFalse(doRefresh(currentDate, oldRefresh, nonWifiState));
assertTrue(doRefresh(currentDate, oldRefresh, wifiState));

So now we have a test suite that covers 100% of our code and guarantees exactly the behavior we want. We haven't had to mess with the system clock, haven't had to touch network state, and there are exactly 0 mock objects.

We retain all the functionality we originally intended: we can call doRefresh() in our application without having to worry about network state or the current time. And we no longer need to write tests for the no-argument doRefresh(); there's really nothing to go wrong, since it just handles interfacing with the external APIs.
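The same idea as a self-contained, Android-free sketch. The `WIFI` constant here is a stand-in I invented for Android's network-type constant, since the real ConnectivityManager API isn't available off-device:

```java
import java.util.Date;

public class RefreshCore {
    // Date.getTime() returns milliseconds, so the threshold is in ms
    static final long MILLIS_IN_THIRTY_DAYS = 30L * 24 * 60 * 60 * 1000;
    static final int WIFI = 1; // stand-in for the Android constant

    // Functional core: a pure decision, trivially testable
    public static boolean doRefresh(Date currentDate, Date refreshDate,
                                    int networkState) {
        return (currentDate.getTime() - refreshDate.getTime())
                    > MILLIS_IN_THIRTY_DAYS
            && networkState == WIFI;
    }

    public static void main(String[] args) {
        Date now = new Date();
        Date recent = new Date(now.getTime() - 1000);
        Date old = new Date(now.getTime() - MILLIS_IN_THIRTY_DAYS - 1);
        System.out.println(doRefresh(now, recent, WIFI));  // false
        System.out.println(doRefresh(now, old, WIFI));     // true
        System.out.println(doRefresh(now, old, WIFI + 1)); // false
    }
}
```

Everything Android-specific lives in the imperative shell, so this core compiles and runs on any JVM.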

Scaling Up

So far the only example of this principle I've given is pretty trivial, but the principle extends way beyond that. At its heart, the idea is to separate your logic from any external considerations. Once your logic is liberated this way, you are free to wire up the pieces however you choose.

For example, using the filter() function happens pretty often:

val ints = listOf(1, 2, 3, 4, 5)
val odds = ints.filter { it % 2 == 1 }

Now all you need to test is the condition it % 2 == 1.

What I'm still working on though, is how granular to make this. The code below will yield a 100% tested solution, but is incredibly verbose:

public boolean isOdd(int value) {
    return (value % 2 == 1);
}

@Test
public void testIsOdd() {
    assertTrue(isOdd(1));
    assertFalse(isOdd(2));
}

I now have 6-8 lines of code (depending on how you count) to test 7 characters of code. This is awful. I'm working on how to scale these tests, but I really like the idea of having 100% coverage without complicated mocking.

Wrapping Up

If you haven't watched that presentation yet, do it. I'll even give you the link again.

But I hope this explains some of how I'm refactoring the design of MinimalBible. I intend to take full advantage of functional programming for this app, because I think great things can come of it. I'll continue to keep everyone updated on how it's going!

Well, let's get right to it! Development over the last couple of weeks has slowed down a bit, and I wanted to give a couple of notes on why, and why there's nothing to be concerned about.

Kotlin & Functional Programming

Gotta say, I've been a huge fan of Kotlin since discovering it. If I had to make a bet on where the future of Android development was going to take place, this is the language I'd stake it on. While it's not billed as a functional language, it's more than capable of being used that way. I've been doing some significant refactoring of the codebase, and it's so much cleaner not using Java.

But why functional programming? Recently I've been taking a couple of online courses on Coursera about functional programming. It's seriously confusing at first (no mutable state? no loops?), but it ends up being incredibly powerful and beautiful.

So development has been a bit slow lately; I've been taking that class, and the homework takes a while. But I'm becoming a better programmer, and that will make development for this app easier, and a whole lot more enjoyable. And now that the class is over, I can get back to it.

Going forward

So, what will things look like going forward? Here's a couple of goals I'm setting for myself:

Alpha release by November 30th

The core of the alpha release has been done for a while now: I can download the Bible text and display it. I need to get the project roadmap updated, but it's there. There are a couple of things left to address before I can say I've hit an Alpha point. UPDATE: I'm using GitHub for tracking issues instead of trying to maintain a roadmap. Find the issue tracker here.

  • Jump to text: currently you can only scroll; the navigation drawer isn't functional.
  • Switch active book: you can download multiple books, but the system selects one automatically at startup.
  • Startup time: if at all possible, I want to get the startup time below 5 seconds. That will be hard, but the work needs to be done.
  • Language filter: currently all available Bibles are displayed; switch that so only Bibles in a selected language are shown.

After this work is done, I will start giving the app to a couple of friends in order to make sure that it works on multiple devices.

Beta Release by January 31st

The Beta release will seek to be an app that I can actually functionally use on a daily basis. The only feature beyond the alpha that is absolutely critical is search functionality. After this, it will be minimal software for sure, but it will represent the core product of what I want to use!

Following this functionality will be a first release to the Play store. It's crazy to think that I might have an app in the store, but I'm looking forward to it.

Back to Work

So now that the online classes are over, I should be able to get some more development time in. I believe the goals I've set for myself are doable, and I'm looking forward to sharing this with the world!

An Alternative Language that works

So, not too long ago I wrote a post on why I wasn't using Groovy for my app. Suffice to say that I wanted to use Groovy, and there were a lot of good things that would have come from it, but the method count made it simply impossible to use. Android applications can only have 65,536 methods in them, and while that sounds like a lot, external libraries very quickly eat into that limit. Groovy itself takes ~30,000.

Not too long after writing that post, Mike Gouline wrote a post about another language called Kotlin, calling it the "Swift of Android." I wanted to see if it would actually hold up.

Some quick background though: Kotlin is a language created by the same people behind the IntelliJ IDE, which is the IDE driving Android Studio. Kotlin's selling points are compelling: it plays nicely with Java, is incredibly expressive, and helps a lot with null safety. So, I wanted to take some time to outline the Good, the Bad, and the Ugly, and why I'm using Kotlin going forward.

The Good

So why would I want to use Kotlin? Three reasons:

First:

Lambda functions. Call them closures if you will, but I'm a big fan of how much less code I need to write, and of functions being first-class objects. I've had a lot of experience in Python, and there are a lot of really cool things you can do because of this. A quick example using RxJava:

Java

public static void main(String[] args) {
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6);
    Observable.from(list)
        .filter(new Func1<Integer, Boolean>() {
            @Override
            public Boolean call(Integer value) {
                return value % 2 == 0;
            }
        })
        .forEach(new Action1<Integer>() {
            @Override
            public void call(Integer value) {
                System.out.println(value);
            }
        });
}

Kotlin

public fun main(args: Array<String>) {
    val list = listOf(1, 2, 3, 4, 5, 6)
    Observable.from(list)
        .filter { it % 2 == 0 }
        .forEach { System.out.println(it) }
}

Second:

I don't have a lot of space, but Extension Functions have already been very useful as well. Check out an example over here.

The basic premise is that you can extend a class by adding extra functions to it that the original Java code didn't have. So, you can remove classes like this:

Java

public class VersificationUtil {
    public static Versification getVersification(Book b) {
        // Implementation here...
    }
}

public static void main(String[] args) {
    System.out.println(VersificationUtil.getVersification(myBook));
}

And instead write code that looks like this:

Kotlin

public fun Book.getVersification(): Versification {
    // Same implementation, but use `this` instead of `b`
}

public fun main(args: Array<String>) {
    System.out.println(myBook.getVersification())
}

Third:

The method count. Groovy is a fantastic language, and I'd really love to use it. But because it takes up so many methods (~30,000), it's simply impossible.

Kotlin on the other hand takes up ~6000 methods. A fifth of the size for approximately the same functionality that I would actually end up using. I really like that. All of the features, (almost) none of the cost.

The Bad

So, there are a couple frustrating things with Kotlin.

Tests cannot be written in Kotlin; all tests executed by Android must still be written in Java. I haven't tried Robolectric tests (which run on the host computer, not on an Android device), but I imagine the situation is the same. This is certainly not a game changer, but it does make testing things like extension functions a bit painful.

Documentation is still a bit lacking. I had issues finding what I was looking for. For instance, there are no longer any MyClass.class references; you should use javaClass<MyClass>() instead (find the section starting with getClass()). While this works (I guess), it just took a long time to find. Most of that is my fault, but some more examples would have been helpful. Additionally, the community is still a bit small, so just Googling around won't cut it; you need to actually read the documentation.

Final by default. I ran into this issue when I was trying to write tests for code written in Kotlin. In Kotlin, by default, everything is final (see the section labeled Inheritance). This ended up causing issues because Mockito was then unable to mock anything. While I only needed to add the open modifier to my classes and functions being mocked, it took a while to figure out what was going on.

That said, all of these things are easily fixed, and I can get over them pretty quickly.

The Ugly

And this is the reason why I almost didn't use Kotlin at all. For Groovy, the method count meant that I simply was unable to incorporate it into my application.

For Kotlin, the fatal flaw is annotation processing. Because Kotlin is an alternative language that is compiled to Java bytecode, it doesn't make a whole lot of sense for annotation processing tools to run on it. That is, the tool would be generating Java code for a file written in another language. If said tool read from the *.class file, it would work. But Annotation Processing doesn't make any sense from the source code.

This is incredibly problematic if (like me) a lot of your codebase relies on annotation processing. The two most notable subjects are Dagger and Butterknife. Dagger has been an incredibly useful tool to remove a lot of ugly code and make testing possible. Butterknife is more a utility, but still incredibly useful. Neither of these work on Kotlin currently (there is an open ticket to fix this), nor do I believe they ever will.

So, does using Kotlin mean I have to sacrifice my dependency injection system? Simply put: no. In the blog referenced earlier, the author details how you can write the annotated class in Java and extend it in Kotlin. I think this is a terrible idea: you then have to worry about modifying two languages whenever you edit a class, and I imagine Butterknife or Dagger can get confused. Having two files for a single class doesn't make a lot of sense.

However, if dependency injection is the only big problem, we can work around that. Dagger allows you to write @Provides functions that return an object. So, all we need to do is write a provider that instantiates the Kotlin class.

Here's a pure Java example of what I'm talking about:

public class MyClass {
    public String doSomething() {
        // This is the part where you do something
    }
}

public class DependentClass {
    private final MyClass mC;

    public DependentClass(MyClass mC) {
        this.mC = mC;
    }
}

@Module(injects = DependentClass.class)
public class MyModule {
    @Provides MyClass provideClass() {
        return new MyClass();
    }

    @Provides DependentClass provideDependent(MyClass mC) {
        return new DependentClass(mC);
    }
}

So now we've set up a way to do dependency injection without relying on @Inject annotations in the concrete classes. Moving this to Kotlin is then trivial:

// Kotlin classes
class MyClass() {
    public fun doSomething(): String {
        // This is the part where you do something
    }
}

class DependentClass(mC: MyClass) {
    val mC = mC
}
// This next section is the same @Module from above, still written in Java
@Module(injects = DependentClass.class)
public class MyModule {
    @Provides MyClass provideClass() {
        return new MyClass();
    }

    @Provides DependentClass provideDependent(MyClass mC) {
        return new DependentClass(mC);
    }
}

The end result: Kotlin can participate in Dependency Injection without having to worry about the framework.

The caveat: Because Kotlin can't participate in the framework, that means that you need to write some extra code in your @Module classes to do the manual injection. However, it's faster for me to write the @Module code than it is to write the Java class.

The Summary

So far, despite the "fatal flaw" named above, I'm happy with Kotlin. I plan on using it for all classes that don't need to actively participate in any of the code generation frameworks. So while Fragments, Activities, and the like will still need to be Java, I can write the bulk of the code in Kotlin to make it nice and quick.