Generate and track metrics for Flask API applications using Prometheus and Grafana

The code for this entire implementation can be found here: https://github.com/ramdesh/flask-prometheus-grafana-example

Flask is a very popular, lightweight framework for writing web applications and web services in Python. In this blog post, I’m going to talk about how to monitor metrics for a Flask RESTful API using Prometheus and Grafana. We’ll tie it all together with docker-compose so that we can run everything with a single command, in an isolated Docker network.

Prometheus is a time-series database that is extremely popular for metrics and monitoring, especially with Kubernetes. Prometheus is really cool because it is designed to scrape metrics from your application, instead of your application having to actively push metrics to it. Coupled with Grafana, this stack turns into a powerful metrics tracking and monitoring tool, used in applications the world over.

To couple Flask with Prometheus and Grafana, we’re going to use the invaluable prometheus_flask_exporter library. This library exposes a /metrics endpoint for Prometheus to scrape, with useful metrics about endpoint access such as the time taken to generate each response, CPU metrics, and so on.

The first thing we need to do is create our Flask app. Here’s a really simple server.py with the exporter library included:

import logging

from flask import Flask
from flask import jsonify
from prometheus_flask_exporter import PrometheusMetrics

logging.basicConfig(level=logging.INFO)
logging.info("Setting LOGLEVEL to INFO")

api = Flask(__name__)
metrics = PrometheusMetrics(api)

metrics.info("app_info", "App Info, this can be anything you want", version="1.0.0")


@api.route("/flask-prometheus-grafana-example/")
def hello():
    return jsonify(say_hello())


def say_hello():
    return {"message": "hello"}

This code just returns a “hello” message when you access the flask-prometheus-grafana-example endpoint. The important part here is the integration of the prometheus_flask_exporter library. All you have to do is initialize a metrics object with metrics = PrometheusMetrics(app), passing in your Flask application object, to get it working. After that, it automatically exposes metrics for your endpoints at your application’s /metrics endpoint. If you go to your app’s /metrics endpoint after running it, you’ll be greeted with something like this:
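
Here’s a representative excerpt of that output (abbreviated and paraphrased from memory; the exact metric names and labels depend on the prometheus_flask_exporter version you have installed):

flask_http_request_duration_seconds_count{method="GET",path="/flask-prometheus-grafana-example/",status="200"} 1.0
flask_http_request_duration_seconds_sum{method="GET",path="/flask-prometheus-grafana-example/",status="200"} 0.0009
flask_http_request_total{method="GET",status="200"} 1.0
app_info{version="1.0.0"} 1.0
process_resident_memory_bytes 28483584.0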

Now to set up Prometheus and Grafana. For Prometheus, you need a prometheus.yml file, which would look something like this:

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
    - targets: ['example-prometheus:9090']

  - job_name: 'flask-api'
    scrape_interval: 5s
    static_configs:
    - targets: ['flask-api:5000']

In this example, Prometheus is watching two endpoints: itself, at example-prometheus:9090, and the Flask API, at flask-api:5000. These hostnames are simply the names we give the containers in the docker-compose config file, which we will get to later.

For Grafana, we need a datasource.yml file:

# config file version
apiVersion: 1

# list of datasources that should be deleted from the database
deleteDatasources:
  - name: Prometheus
    orgId: 1

# list of datasources to insert/update depending on
# what's available in the database
datasources:
  # <string, required> name of the datasource. Required
- name: Prometheus
  # <string, required> datasource type. Required
  type: prometheus
  # <string, required> access mode. direct or proxy. Required
  access: proxy
  # <int> org id. will default to orgId 1 if not specified
  orgId: 1
  # <string> url
  url: http://example-prometheus:9090
  # <string> database password, if used
  password:
  # <string> database user, if used
  user:
  # <string> database name, if used
  database:
  # <bool> enable/disable basic auth
  basicAuth: false
  # <string> basic auth username, if used
  basicAuthUser:
  # <string> basic auth password, if used
  basicAuthPassword:
  # <bool> enable/disable with credentials headers
  withCredentials:
  # <bool> mark as default datasource. Max one per org
  isDefault: true
  # <map> fields that will be converted to json and stored in json_data
  jsonData:
     graphiteVersion: "1.1"
     tlsAuth: false
     tlsAuthWithCACert: false
  # <string> json object of data that will be encrypted.
  secureJsonData:
    tlsCACert: "..."
    tlsClientCert: "..."
    tlsClientKey: "..."
  version: 1
  # <bool> allow users to edit datasources from the UI.
  editable: true

In this file, the important field is url under datasources, which again refers to the Prometheus container by the name it is given on the Docker network in the docker-compose file.

Finally, we have a config.monitoring file:

GF_SECURITY_ADMIN_PASSWORD=pass@123
GF_USERS_ALLOW_SIGN_UP=false

This means we’ll be logging in to our Grafana dashboard with username admin and password pass@123, and that user sign-up is disabled.

Next, we’re going to load this all up using a Docker Compose file:

version: "3.5"

services:
  flask-api:
    build:
      context: ./api
    restart: unless-stopped
    container_name: flask-api
    image: example-flask-api
    ports:
      - "5000:5000"
    networks:
      example-network:
        ipv4_address: 172.16.238.10

  example-prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    container_name: example-prometheus
    ports:
      - 9090:9090
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
    networks:
      example-network:
        ipv4_address: 172.16.238.11

  example-grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    user: "472"
    container_name: example-grafana
    depends_on:
      - example-prometheus
    ports:
      - 3000:3000
    volumes:
      - ./monitoring/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
    env_file:
      - ./monitoring/config.monitoring
    networks:
      example-network:
        ipv4_address: 172.16.238.12

networks:
  example-network:
    name: example-network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.16.238.0/24

Note that we’re creating our own Docker network and putting all of our containers on it, which allows them to talk to each other by name. This would work much the same if you didn’t specify a network at all, since Compose puts all services on a shared default network anyway. Also note that I’m using a WSGI server, rather than the Flask development server, to run the Flask application inside the container.
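
The contents of the ./api build context aren’t shown in this post, but a minimal WSGI entry point could look like the sketch below. This is an illustration under assumptions, not the repository’s actual code: the file name wsgi.py and the module name server are hypothetical.

# wsgi.py -- hypothetical entry point; assumes the Flask app above lives in server.py
from server import api as app

# The container could then start the app with a WSGI server such as gunicorn:
#   gunicorn --bind 0.0.0.0:5000 wsgi:app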

Once Grafana is up, you should be able to log in and configure Prometheus as a datasource:

Add Prometheus as a datasource for Grafana

Once that’s done, you can use the example dashboard from the creator of the prometheus_flask_exporter library (use Import->JSON) which can be found here: https://github.com/rycus86/prometheus_flask_exporter/blob/master/examples/sample-signals/grafana/dashboards/example.json

This gives you a cool dashboard like this:

Grafana dashboard for Flask metrics

As you can see, this gives us a killer Prometheus + Grafana setup for monitoring a Flask web service with minimal effort.

I wish everything in software development was this easy.

Using a Redis feeder with Gatling

Gatling is a popular open source load testing tool that allows you to write load tests as code. It is based on Scala, which means you get more done with less code, and it allows a great deal of flexibility in how you design your load test scenarios. It can send millions of requests to an application within a short period of time, emulating different users working through different use cases.

In some cases, it is more practical to send real data with Gatling rather than randomly generated test data. For sending millions of actual records, Gatling provides the ability to integrate one of several data feeders, which can be wired up to supply Gatling with a constant stream of data to send to your application.

I was presented with exactly this need: as part of a load test, I had to send millions of records, pulled out of a PostgreSQL database and transformed, to a set of Lambda-based web services hosted on AWS. Since there was a transformation step, and the same dataset would be needed for different scenarios across multiple test runs, I decided to store the transformed items in Redis for speed and quick access from Gatling (as Gatling would be running on the same box).

A quick docker-compose setup for Redis and I had my database:

---
version: "3.7"
services:
  redis:
    image: redis:latest
    container_name: data_cache
    volumes:
      - '/etc/redis/data/:/data/'
    ports:
      - 6379:6379
    expose:
      - 6379
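
The transformation-and-load step isn’t shown in this post, but as a rough illustration, pulling rows out of PostgreSQL and pushing them onto a Redis list might look something like the Python sketch below. The table, column, and connection details are made up; only the list key mydatalist matches the feeder configuration that follows.

# load_feeder_data.py -- hypothetical one-off loader (illustration only)
import json

import psycopg2
import redis

pg = psycopg2.connect(host="localhost", dbname="mydb", user="me", password="secret")
cache = redis.Redis(host="localhost", port=6379)

with pg, pg.cursor() as cur:
    cur.execute("SELECT id, payload FROM source_records")  # hypothetical table
    for record_id, payload in cur:
        # Apply whatever transformation the target API expects, then push onto the list
        item = {"id": record_id, "body": payload}
        cache.rpush("mydatalist", json.dumps(item))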

Now we can move on to initializing and using the Redis feeder within our Simulation class in Gatling. The following imports are needed in the Scala class:

import com.redis._
import io.gatling.redis.Predef._
import io.gatling.redis.feeder.RedisFeederBuilder

Assuming that the items that need to be sent to the application by Gatling are stored in a list on Redis, the feeder can be initialized and integrated as follows:

val dataFeederPool = new RedisClientPool("localhost", 6379)
val myDataFeeder: RedisFeederBuilder = 
  redisFeeder(dataFeederPool, "mydatalist").LPOP

Here we have used the Redis LPOP command, which pops items off the head of a Redis list; the Gatling Redis feeder can also use SPOP and SRANDMEMBER, which work on Redis sets instead.

Integrating the feeder into a scenario is just as simple:

val myScn: ScenarioBuilder = scenario("Post an item")
  .feed(myDataFeeder)
  .exec(myHttpRequest)

After integrating the feeder, you can implement whatever checks, extra parameters or conditional statements that you need.

GSoC 2019 is here!

I’ve always been a huge fan of the Google Summer of Code program, having been involved with it in many capacities over the years.

The 2019 iteration of GSoC has been formally announced and the student application period starts soon.

Here’s the official word from Google:

Google Summer of Code (GSoC) will begin accepting student applications on March 25, 2019. The global, online program is designed to engage university students with open source software communities.  

GSoC is a highly competitive and unique program – in the past 14 years we have had over 14,000 students from 109 countries accepted into the program. Participants are paired with a mentor to help introduce them to an open source community and guide them with their project. GSoC gives students invaluable real world experience and an opportunity to receive a stipend to contribute to open source software.

For the 2019 program 206 open source organizations have been accepted into the program. Now is a great time for students to check out project ideas and reach out to the organizations to learn more!

We would appreciate your help to spread the word about Google Summer of Code to local university students. Check out the resources below:

  • Website: g.co/gsoc
  • Student application window: March 25 – April 9, 2019
  • Flyers, Slide Decks, Logos: https://developers.google.com/open-source/gsoc/resources/media
  • Project Topics Include: Cloud, Operating Systems, Machine Learning, Graphics, Medicine, Programming Languages, Robotics, Physics, Science, Security and many more.
  • Sample Twitter: University students receive mentorship and a stipend to work on open source software through Google Summer of Code. Applications are open March 25 – April 9! g.co/gsoc

Details of the program are available on the program site and be sure to check out the Advice for Students for quick tips and the Student Guide for more details on the whole program.

Please contact us at gsoc-support@google.com if you have any questions.

Commit notifications for Amazon CodeCommit using a Lambda Function and a Telegram Bot

Telegram is one of my favorite chat applications: it provides security, speed, and a myriad of other features you don’t find in most other chat services. One of those features is its Bot platform, which allows the creation of chat bots for Telegram that can do many different things. Among these are bots for services such as GitLab and Bitbucket, which work off webhooks to send commit details for repositories to a given chat on Telegram.

Amazon Web Services’ CodeCommit is a managed source control service that hosts Git repositories. When using CodeCommit, I wanted to send notifications about new commits to a Telegram group, similar to what the aforementioned GitLab and Bitbucket bots do. So I set out to achieve this using a CodeCommit trigger, a Lambda function, and a very simple Telegram bot.

On a push to a given repository, CodeCommit invokes a trigger that calls an AWS Lambda function, which in turn makes a REST call to the Telegram Bot API.

As seen above, I created a Lambda function that is triggered by specific events on the CodeCommit repository and contains the code needed to send a GET request to the Telegram Bot API, which has the Telegram bot post a message to a specific group.

So, let’s get down to brass tacks:

  1. Create the Telegram Bot: All you need to do is chat with @BotFather on Telegram. This is a Telegram bot that helps you create new bots. Through some simple commands, you can get a new bot created for yourself and receive its generated auth token as well:

    Chat with BotFather to get your bot created

    NOTE: While @BotFather insists that it may come back some day to me with a request of its own, it has not done so yet, so I’m hoping I won’t be finding any severed horse heads on my bed anytime soon.

  2. Assuming you already have an AWS CodeCommit repository, create your Lambda function: You can do this the other way around as well, but I prefer to create the Lambda function and then bind the CodeCommit repository to it, as AWS lets you do this very easily. You can specify which repository you want to work with, what to name the trigger, and so on in the create-trigger workflow on the Lambda function:

    Drag and drop a CodeCommit trigger from the left

    Then choose your repository, name your trigger etc below

    Once you’ve configured this stuff, you can go ahead and…

  3. Code your Lambda function: Here’s the sample code:
    var http = require('https');
    var AWS = require('aws-sdk');
    
    exports.handler = (event, context) => { 
        var codecommit = new AWS.CodeCommit({ apiVersion: '2015-04-13' });
        
        // Build the Telegram Bot URL -- note that the API path is /bot<token>/sendMessage
        var baseUrl = "https://api.telegram.org/bot" + "MY_BOT_TOKEN" + "/sendMessage?chat_id=" + "MY_CHAT_ID" + "&text=";
        
        // Get the commit ID from the event
        var commits = event.Records[0].codecommit.references.map(
            function(reference) {
                return reference.commit;
            });
        console.log('CommitId:', commits);
        
        // Get the repository from the event and use it to get details of the commit
        var repository = event.Records[0].eventSourceARN.split(":")[5];
        codecommit.getCommit({
            commitId: commits[0],
            repositoryName: repository
        }, function(err, data) {
            if(err) {
                context.fail(err);
            } else {
                console.log(data);
                var commit = data.commit;
                var commitDetails = 'New commit to my repo: \nRef: ' + event.Records[0].codecommit.references[0].ref
                    + '\nMessage: ' + commit.message + '\nAuthor: ' + commit.author.name + ' <' + commit.author.email + '>';
                // URL-encode the text so spaces and newlines survive the query string
                var url = baseUrl + encodeURIComponent(commitDetails);
                http.get(url, function(res) {
                    console.log("Success");
                    context.succeed();
                  }).on('error', function(e) {
                    console.log("Failed");
                    context.fail();
                  });
            }
        });
    };
    

    I wanted to send my messages to a specific Telegram group (the one with the team members who work on this particular repository), so I used @RawDataBot to get that group’s chat ID. @RawDataBot posts a massive JSON string as soon as it joins the group, and the chat ID is included in it.

    In the code above, I’ve used the aws-sdk npm package to access the CodeCommit JS API, which can be used to look up details such as the commit message and author from the minimal information provided by the CodeCommit trigger. In fact, all you get from the trigger (the event object that is passed to the Lambda function) is the ref that the commit occurred on and the commit ID.

    Then, some mediocre object manipulation later, I’m making a very simple HTTP call to the Telegram Bot API endpoint, which in turn has my bot send a message to the group I’ve specified (a standalone sketch of that same API call appears right after this walkthrough). In the end, it looks something like this:

    There’s a lot going on here. We have the bot sending messages about commits as well as builds (through a CodePipeline trigger, not covered in this post), and there’s also some gloating from me on some other conversation we were having on the group 😀

     

    And well, that’s how it’s done.
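
As a footnote to the walkthrough above: the sendMessage call that the Lambda function makes is just a plain HTTP GET against the Telegram Bot API, so you can sanity-check your bot token and chat ID from anywhere before wiring up the trigger. Here is a minimal Python sketch of that call; the token and chat ID values below are placeholders, not real ones.

# telegram_check.py -- quick manual test of a bot token and chat ID (placeholders below)
import requests

BOT_TOKEN = "MY_BOT_TOKEN"  # placeholder; issued by @BotFather
CHAT_ID = "MY_CHAT_ID"      # placeholder; e.g. obtained via @RawDataBot

response = requests.get(
    "https://api.telegram.org/bot" + BOT_TOKEN + "/sendMessage",
    params={"chat_id": CHAT_ID, "text": "Hello from the commit notifier"},
)
print(response.status_code, response.text)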

A boilerplate project for NodeJS + ExpressJS on ECMAScript 6 with MongoDB

With the advent of ECMAScript 6 (also known as ECMAScript 2015 or ES6), a whole bunch of new JavaScript features were introduced, most of them game-changers for anyone who wanted to switch over. My colleague Ragavan and I took it upon ourselves to convert one of our existing ES5 NodeJS projects to ES6, and I thought it would be good to put together a base project that anyone could use to bootstrap a typical NodeJS + ExpressJS + MongoDB + REST project using the tools that we used.

The full code, along with the list of tools we used to make this work, can be found on Github. A huge shout out to Ashantha Lahiru, who worked really hard to make the code presentable and more generic, and who also handled the MongoDB integration.

Troubleshooting Blazemeter incompatibility issues

Blazemeter is a great wrapper around JMeter that allows you to run JMeter load tests from various locations and generate rich reports.

One of its coolest features is that it lets you upload existing JMeter test suites as JMX files and execute them with whatever load you specify.

However, I found that test startup itself was failing on Blazemeter when I uploaded a JMX file generated by JMeter 3.1. Upon consulting their helpful support team, I was told that Blazemeter’s compatibility with JMeter 3.1 was experimental, and that I should remove a bunch of listeners I had added to the test plan for reporting.

This solved the problem, and I was able to run my tests.

Running Cloudant Queries through the Cloudant NPM Package

Cloudant is an increasingly popular cloud-hosted NoSQL database. It is based heavily on CouchDB, and provides a very easy-to-use, HTTP-verb-based web service interface for carrying out database operations.

When using Cloudant with Node.js or Express.js, the Cloudant NPM package, which is essentially a wrapper around the CouchDB NPM package known as nano, comes in handy. But while its documentation explains how to perform various operations such as getting a document by its ID or doing bulk operations, it is quite vague about how to run the extremely useful operations based on Cloudant Query, which allows you to write complex selectors like the following:

{
  "selector": {
    "_id": "myid",
    "$or" : [
        {
          "$and" : [
            {
                "endDate" : {"$gt" : "2015-11-05"}
            },
            {
                "endDate" : {"$lte" : "2015-11-30"}
            }
          ]
        },
        {
          "$and" : [
              {
                  "startDate" : {"$gte" : "2015-11-05"}
              },
              {
                  "startDate" : {"$lt" : "2015-11-30"}
              }
          ]
        }
    ],
    "usertype":"admin"
  }
}

So how do you execute a query like that through the functions provided in the Cloudant npm package? The secret lies in the package’s find function. The query above can be executed as follows:

db.find({"selector": {
    "_id": "myid",
    "$or" : [
        {
          "$and" : [
            {
                "endDate" : {"$gt" : "2015-11-05"}
            },
            {
                "endDate" : {"$lte" : "2015-11-30"}
            }
          ]
        },
        {
          "$and" : [
              {
                  "startDate" : {"$gte" : "2015-11-05"}
              },
              {
                  "startDate" : {"$lt" : "2015-11-30"}
              }
          ]
        }
    ],
    "usertype":"admin"
  }
}, function(error, result) {});

And just like that, you can execute any complex query that works with Cloudant Query through the Cloudant npm package.

Running PHP applications in Apache Tomcat 7

Apache Tomcat 7 is a popular server for running Java applications. But what if you want to run both PHP and Java applications on Tomcat, without running two separate servers (i.e. Apache HTTPD plus Tomcat) to achieve it?

PLEASE NOTE THAT THIS METHOD DOES NOT WORK ON TOMCAT 8 AS OF THE DATE OF WRITING.

For this, we’re going to use the PHP/Java Bridge for Tomcat, a handy little package that sorts out PHP execution on top of Tomcat.

The first step is to download the JavaBridge template from this link at SourceForge. This will give you a WAR file that you can drop directly into your Tomcat server’s webapps folder.

For Windows users, the next step is to download PHP and extract it somewhere.


For Linux users, just install php5 using apt-get, yum, or your preferred package management tool.

The next step is to add PHP to your PATH: append the path to your PHP folder to the end of the PATH environment variable.

Next, drop the JavaBridgeTemplate61.war file into your Tomcat server’s webapps folder and start Tomcat.

When you navigate to your server’s Java Bridge folder (this is http://localhost:8080/JavaBridgeTemplate61/ by default), you’ll be able to see the Java Bridge’s index page:

Javabridge welcome page

 

If you go to JavaBridgeTemplate61/test.php, you’ll be able to see the output of the phpinfo() function:

phpinfo

 

If you open up test.php in the JavaBridgeTemplate61 folder inside your Tomcat’s webapps directory, you can edit the code and see your changes reflected.


Now you can deploy any PHP application within the JavaBridgeTemplate folder and rename the folder to whatever you want. You can add many folders like this for different PHP applications.

Note that the JavaBridge runs PHP as CGI. If you want to enable or disable specific PHP extensions, you can do so as you normally would, in your php.ini file.

Behat testing PHP applications with Moodle as an example

The Behat testing framework advocates the relatively new concepts of Behavior-Driven Development (BDD) and Acceptance-Test-Driven Development (ATDD), where human-readable tests are written for highly user-oriented tasks by the developers themselves. This automates the User Acceptance Testing (UAT) process to a certain degree, as the tests are not written in highly technical terminology and follow front-end testing paradigms. Essentially, the idea is that you define a test describing how your application should work from the front end, and then develop the feature from there. Behat works on PHP 5.3 and upwards.

Moodle is a Free and Open Source learning management system with an active community from all around the world. Moodle is also a good example of extensive use of Behat to test features. (I should note that Sahana makes use of the Robot Framework for ATDD as well.)

I thought of covering some of the basics in the Behat framework using examples from how Moodle does it. While Moodle’s implementation might be somewhat different from other projects that use Behat, the basics of each implementation should be reasonably similar.

The following is an example for a typical Behat test, which Behat calls a feature file.

@auth
 Feature: Login
   In order to login
   As a moodle user
   I need to be able to validate the username and password against moodle

   Scenario: Login as an existing user
     Given I am on "login/index.php"
     When I fill in "username" with "admin"
     And I fill in "password" with "admin"
     And I press "loginbtn"
     Then I should see "Moodle 101: Course Name"

   Scenario: Login as a non-existent user
     Given I am on "login/index.php"
     When I fill in "username" with "nosuchuser"
     And I fill in "password" with "wrongpassword"
     And I press "loginbtn"
     Then I should see "Invalid login, please try again"

Notice that the feature is written in Gherkin, the feature language that comes from Cucumber, the BDD/ATDD framework for Ruby. Each of these Gherkin steps is tied to a PHP function in the Behat framework; for example,

And I press "loginbtn"

ties in to;

/**
 * Presses button with specified id|name|title|alt|value.
 *
 * @When /^I press "(?P<button_string>(?:[^"]|\\")*)"$/
 * @throws ElementNotFoundException Thrown by behat_base::find
 * @param string $button
 */
public function press_button($button) {

    // Ensures the button is present.
    $buttonnode = $this->find_button($button);
    $buttonnode->press();
}

in behat_forms.php in Moodle’s Behat integration. It is also possible to reuse these functions to write very specific test steps for your application. For example, Moodle has an ‘Editing mode’ in the Courses module that allows course administrators and teachers to see edit links on various nodes within a course. Moodle’s implementation of Behat provides

And I turn editing mode on

which translates to

/**
 * Turns editing mode on.
 * @Given /^I turn editing mode on$/
 */
 public function i_turn_editing_mode_on() {
   return new Given('I press "' . get_string('turneditingon') . '"');
 }

which is implemented in the behat_course.php custom class within Moodle’s course core module. Note that get_string($string) is a function specific to Moodle’s framework.

An interesting feature of Behat is its integration with Mink and Selenium WebDriver to execute the tests against the UI itself. Once you run the tests, Selenium opens a browser window and executes the steps described in each feature file sequentially, while you watch.

The Moodle community provides a very comprehensive guide on how to get started with Behat on Moodle, so I will not reproduce that here. I will get on to describing more of the finer points of Behat testing later on.

Adding the AndEngine Box2D Physics Extension to an AndEngine project

I’ve talked about AndEngine before as a great framework for building 2D games on the Android platform. Today, I will be explaining how to include the AndEngine Box2D extension within an AndEngine project so that you can start implementing realistic physics within your game.

The AndEnginePhysicsBox2DExtension provides a binding of the popular C++ Box2D physics engine for AndEngine. Getting this doohicky integrated with your current AndEngine project can be a little tricky, however, and that’s what I’m going to explain today.

First, you need to have an Eclipse project with AndEngine included in any way you prefer. Note that you’ll have to have AndEngine as a library project in Eclipse before you begin, as the Box2D Extension will need it as well.

The next step is to clone the Github repository for the AndEnginePhysicsBox2DExtension.

Git Clone the Box2D Extension

Now, import the extension as a Library project to your Eclipse workspace. (Right click on workspace -> Import -> Android -> Android Project from Existing Code).

Import the extension as existing Android code to your workspace

Once you’ve imported the extension, it will immediately build a set of JARs and various other binary files. Now, there are two ways to include the extension in an existing AndEngine project:

  1. Include it as a library project (recommended) :
    1. Make sure your AndEnginePhysicsBox2DExtension workspace project is marked as a library. You should be able to check this by right-clicking on the extension project, going to Properties, and checking whether ‘Is Library’ is ticked under the Android section.

      Make sure that your extension project is a library

      Now, right-click on your existing AndEngine project, go to Properties, and open the Android section. In the libraries box at the bottom, you should be able to add the AndEnginePhysicsBox2DExtension as a library.

      Add the extension as a library to your AndEngine project
  2. Add AndEnginePhysicsBox2DExtension binaries to your libs folder:
    1. As I mentioned earlier, your AndEnginePhysicsBox2DExtension workspace project will build a set of extension binaries in the /bin folder as soon as you import it. Although it is not recommended by the developer of AndEngine, you can copy over these files to your AndEngine project’s /libs folder.

      Copy Physics Extension JAR files to your /libs folder

       

Now that you’ve included the AndEnginePhysicsBox2DExtension in your AndEngine project, you can proceed to write awesome physics code into your game, like using the gravity of the first Death Star within the game’s environment!

Use the gravity of the first Death Star in your game! This is actually an Android thing, not an AndEngine thing.