
Commit

Heiko Eißfeldt committed Jun 12, 2019
2 parents 1c2ed83 + 7a236b1 commit 0113c4f
Showing 16 changed files with 244 additions and 259 deletions.
2 changes: 1 addition & 1 deletion afl-as.c
Original file line number Diff line number Diff line change
@@ -377,7 +377,7 @@ static void add_instrumentation(void) {
}

/* Label of some sort. This may be a branch destination, but we need to
tread carefully and account for several different formatting
read carefully and account for several different formatting
conventions. */

#ifdef __APPLE__
6 changes: 3 additions & 3 deletions afl-fuzz.c
@@ -4066,7 +4066,7 @@ static void show_stats(void) {

/* Lord, forgive me this. */

SAYF(SET_G1 bSTG bLT bH bSTOP cCYA " process timing " bSTG bH30 bH5 bH2 bHB
SAYF(SET_G1 bSTG bLT bH bSTOP cCYA " process timing " bSTG bH30 bH5 bH bHB
bH bSTOP cCYA " overall results " bSTG bH2 bH2 bRT "\n");

if (dumb_mode) {
@@ -4833,7 +4833,7 @@ static u32 calculate_score(struct queue_entry* q) {
break;

default:
PFATAL ("Unkown Power Schedule");
PFATAL ("Unknown Power Schedule");
}
if (factor > MAX_FACTOR)
factor = MAX_FACTOR;
@@ -8085,7 +8085,7 @@ int main(int argc, char** argv) {
case LIN: OKF ("Using linear power schedule (LIN)"); break;
case QUAD: OKF ("Using quadratic power schedule (QUAD)"); break;
case EXPLORE: OKF ("Using exploration-based constant power schedule (EXPLORE)"); break;
default : FATAL ("Unkown power schedule"); break;
default : FATAL ("Unknown power schedule"); break;
}

if (getenv("AFL_NO_FORKSRV")) no_forkserver = 1;
2 changes: 1 addition & 1 deletion config.h
@@ -21,7 +21,7 @@

/* Version string: */

#define VERSION "++2.52c"
#define VERSION "++2.52d"

/******************************************************
* *
10 changes: 9 additions & 1 deletion docs/ChangeLog
@@ -14,7 +14,15 @@ sending a mail to <[email protected]>.


-----------------------------
Version ++2.52c (2019-05-28):
Version ++2.52d (tbd):
-----------------------------

- ... your idea or patch?



-----------------------------
Version ++2.52c (2019-06-05):
-----------------------------

- Applied community patches. See docs/PATCHES for the full list.
18 changes: 4 additions & 14 deletions docs/README
@@ -28,7 +28,7 @@ american fuzzy lop plus plus
Released under terms and conditions of Apache License, Version 2.0.

For new versions and additional information, check out:
http://lcamtuf.coredump.cx/afl/
https://github.com/vanhauser-thc/AFLplusplus

To compare notes with other users or get notified about major new features,
send a mail to <[email protected]>.
@@ -513,21 +513,11 @@ Thank you!
15) Contact
-----------

Questions? Concerns? Bug reports? The author can be usually reached at
<lcamtuf@google.com>.
Questions? Concerns? Bug reports? The contributors can be reached via
https://github.com/vanhauser-thc/AFLplusplus

There is also a mailing list for the project; to join, send a mail to
There is also a mailing list for the afl project; to join, send a mail to
<[email protected]>. Or, if you prefer to browse
archives first, try:

https://groups.google.com/group/afl-users

PS. If you wish to submit raw code to be incorporated into the project, please
be aware that the copyright on most of AFL is claimed by Google. While you do
retain copyright on your contributions, they do ask people to agree to a simple
CLA first:

https://cla.developers.google.com/clas

Sorry about the hassle. Of course, no CLA is required for feature requests or
bug reports.
2 changes: 1 addition & 1 deletion docs/perf_tips.txt
@@ -191,7 +191,7 @@ There are several OS-level factors that may affect fuzzing speed:
- Use the afl-system-config script to set all proc/sys settings above

- Disable all the spectre, meltdown etc. security countermeasures in the
kernel if your machine is properly seperated:
kernel if your machine is properly separated:
"ibpb=off ibrs=off kpti=off l1tf=off mds=off mitigations=off
no_stf_barrier noibpb noibrs nopcid nopti nospec_store_bypass_disable
nospectre_v1 nospectre_v2 pcid=off pti=off spec_store_bypass_disable=off
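Applied on an isolated machine, the boot parameters quoted above would typically go onto the kernel command line through the bootloader. A hedged sketch for a Debian-style GRUB setup (the file path, variable name, and update command vary by distribution):

```shell
# Prepend mitigations=off to the kernel command line, then regenerate
# the GRUB configuration. Only for properly separated machines!
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&mitigations=off /' /etc/default/grub
sudo update-grub   # grub2-mkconfig -o /boot/grub2/grub.cfg on some distros
```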
2 changes: 1 addition & 1 deletion docs/power_schedules.txt
@@ -27,7 +27,7 @@ where *α(i)* is the performance score that AFL uses to compute for the seed inp

More details can be found in our paper that was recently accepted at the [23rd ACM Conference on Computer and Communications Security (CCS'16)](https://www.sigsac.org/ccs/CCS2016/accepted-papers/).

PS: In parallel mode (several instances with shared queue), we suggest to run the master using the exploit schedule (-p exploit) and the slaves with a combination of cut-off-exponential (-p coe), exponential (-p fast; default), and explore (-p explore) schedules. In single mode, the default settings will do. **EDIT:** In parallel mode, AFLFast seems to perform poorly because the path probability estimates are incorrect for the imported seeds. Pull requests to fix this issue by syncing the estimates accross instances are appreciated :)
PS: In parallel mode (several instances with shared queue), we suggest to run the master using the exploit schedule (-p exploit) and the slaves with a combination of cut-off-exponential (-p coe), exponential (-p fast; default), and explore (-p explore) schedules. In single mode, the default settings will do. **EDIT:** In parallel mode, AFLFast seems to perform poorly because the path probability estimates are incorrect for the imported seeds. Pull requests to fix this issue by syncing the estimates across instances are appreciated :)

Copyright 2013, 2014, 2015, 2016 Google Inc. All rights reserved.
Released under terms and conditions of Apache License, Version 2.0.
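The parallel-mode advice in the note above can be sketched as a set of afl-fuzz invocations (the seed/sync directory paths, target binary, and instance names are illustrative, not from the commit):

```shell
# Master on the exploit schedule; slaves mix coe, fast, and explore,
# as docs/power_schedules.txt suggests for shared-queue parallel runs.
afl-fuzz -i seeds -o sync_dir -M master -p exploit -- ./target @@
afl-fuzz -i seeds -o sync_dir -S slave1 -p coe     -- ./target @@
afl-fuzz -i seeds -o sync_dir -S slave2 -p fast    -- ./target @@
afl-fuzz -i seeds -o sync_dir -S slave3 -p explore -- ./target @@
```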
8 changes: 4 additions & 4 deletions llvm_mode/split-compares-pass.so.cc
@@ -259,7 +259,7 @@ bool SplitComparesTransform::simplifySignedness(Module &M) {
Instruction *icmp_inv_sig_cmp;
BasicBlock* sign_bb = BasicBlock::Create(C, "sign", end_bb->getParent(), end_bb);
if (pred == CmpInst::ICMP_SGT) {
/* if we check for > and the op0 positiv and op1 negative then the final
/* if we check for > and the op0 positive and op1 negative then the final
* result is true. if op0 negative and op1 pos, the cmp must result
* in false
*/
@@ -369,7 +369,7 @@ bool SplitComparesTransform::splitCompares(Module &M, unsigned bitw) {

BasicBlock* end_bb = bb->splitBasicBlock(BasicBlock::iterator(IcmpInst));

/* create the comparison of the top halfs of the original operands */
/* create the comparison of the top halves of the original operands */
Instruction *s_op0, *op0_high, *s_op1, *op1_high, *icmp_high;

s_op0 = BinaryOperator::Create(Instruction::LShr, op0, ConstantInt::get(OldIntType, bitw / 2));
@@ -403,7 +403,7 @@ bool SplitComparesTransform::splitCompares(Module &M, unsigned bitw) {
cmp_low_bb->getInstList().push_back(icmp_low);
BranchInst::Create(end_bb, cmp_low_bb);

/* dependant on the cmp of the high parts go to the end or go on with
/* dependent on the cmp of the high parts go to the end or go on with
* the comparison */
auto term = bb->getTerminator();
if (pred == CmpInst::ICMP_EQ) {
@@ -448,7 +448,7 @@ bool SplitComparesTransform::splitCompares(Module &M, unsigned bitw) {
term->eraseFromParent();
BranchInst::Create(end_bb, inv_cmp_bb, icmp_high, bb);

/* create a bb which handles the cmp of the lower halfs */
/* create a bb which handles the cmp of the lower halves */
BasicBlock* cmp_low_bb = BasicBlock::Create(C, "injected", end_bb->getParent(), end_bb);
op0_low = new TruncInst(op0, NewIntType);
cmp_low_bb->getInstList().push_back(op0_low);
4 changes: 2 additions & 2 deletions llvm_mode/split-switches-pass.so.cc
@@ -152,7 +152,7 @@ BasicBlock* SplitSwitchesTransform::switchConvert(CaseVector Cases, std::vector<
}
PHINode *PN = cast<PHINode>(I);

/* Only update the first occurence. */
/* Only update the first occurrence. */
unsigned Idx = 0, E = PN->getNumIncomingValues();
for (; Idx != E; ++Idx) {
if (PN->getIncomingBlock(Idx) == OrigBlock) {
@@ -278,7 +278,7 @@ bool SplitSwitchesTransform::splitSwitches(Module &M) {
}
PHINode *PN = cast<PHINode>(I);

/* Only update the first occurence. */
/* Only update the first occurrence. */
unsigned Idx = 0, E = PN->getNumIncomingValues();
for (; Idx != E; ++Idx) {
if (PN->getIncomingBlock(Idx) == OrigBlock) {
1 change: 0 additions & 1 deletion qemu_mode/build_qemu_support.sh
@@ -133,7 +133,6 @@ patch -p1 <../patches/cpu-exec.diff || exit 1
patch -p1 <../patches/syscall.diff || exit 1
patch -p1 <../patches/translate-all.diff || exit 1
patch -p1 <../patches/tcg.diff || exit 1
patch -p1 <../patches/elfload2.diff || exit 1

echo "[+] Patching done."

4 changes: 3 additions & 1 deletion qemu_mode/patches/afl-qemu-cpu-inl.h
@@ -9,6 +9,8 @@
TCG instrumentation and block chaining support by Andrea Biondo
<[email protected]>
QEMU 3.1.0 port and thread-safety by Andrea Fioraldi
<[email protected]>
Copyright 2015, 2016, 2017 Google Inc. All rights reserved.
@@ -19,7 +21,7 @@
http://www.apache.org/licenses/LICENSE-2.0
This code is a shim patched into the separately-distributed source
code of QEMU 2.10.0. It leverages the built-in QEMU tracing functionality
code of QEMU 3.1.0. It leverages the built-in QEMU tracing functionality
to implement AFL-style instrumentation and to take care of the remaining
parts of the AFL fork server logic.
165 changes: 165 additions & 0 deletions qemu_mode/patches/afl-qemu-tcg-inl.h
@@ -0,0 +1,165 @@
/*
american fuzzy lop - high-performance binary-only instrumentation
-----------------------------------------------------------------
Written by Andrew Griffiths <[email protected]> and
Michal Zalewski <[email protected]>
Idea & design very much by Andrew Griffiths.
TCG instrumentation and block chaining support by Andrea Biondo
<[email protected]>
QEMU 3.1.0 port and thread-safety by Andrea Fioraldi
<[email protected]>
Copyright 2015, 2016, 2017 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at:
http://www.apache.org/licenses/LICENSE-2.0
This code is a shim patched into the separately-distributed source
code of QEMU 3.1.0. It leverages the built-in QEMU tracing functionality
to implement AFL-style instrumentation and to take care of the remaining
parts of the AFL fork server logic.
The resulting QEMU binary is essentially a standalone instrumentation
tool; for an example of how to leverage it for other purposes, you can
have a look at afl-showmap.c.
*/

void afl_maybe_log(void* cur_loc);

/* Note: we convert the 64 bit args to 32 bit and do some alignment
and endian swap. Maybe it would be better to do the alignment
and endian swap in tcg_reg_alloc_call(). */
void tcg_gen_afl_maybe_log_call(target_ulong cur_loc)
{
int real_args, pi;
unsigned sizemask, flags;
TCGOp *op;

TCGTemp *arg = tcgv_ptr_temp( tcg_const_tl(cur_loc) );

flags = 0;
sizemask = dh_sizemask(void, 0) | dh_sizemask(ptr, 1);

#if defined(__sparc__) && !defined(__arch64__) \
&& !defined(CONFIG_TCG_INTERPRETER)
/* We have 64-bit values in one register, but need to pass as two
separate parameters. Split them. */
int orig_sizemask = sizemask;
TCGv_i64 retl, reth;
TCGTemp *split_args[MAX_OPC_PARAM];

retl = NULL;
reth = NULL;
if (sizemask != 0) {
real_args = 0;
int is_64bit = sizemask & (1 << 2);
if (is_64bit) {
TCGv_i64 orig = temp_tcgv_i64(arg);
TCGv_i32 h = tcg_temp_new_i32();
TCGv_i32 l = tcg_temp_new_i32();
tcg_gen_extr_i64_i32(l, h, orig);
split_args[real_args++] = tcgv_i32_temp(h);
split_args[real_args++] = tcgv_i32_temp(l);
} else {
split_args[real_args++] = arg;
}
nargs = real_args;
args = split_args;
sizemask = 0;
}
#elif defined(TCG_TARGET_EXTEND_ARGS) && TCG_TARGET_REG_BITS == 64
int is_64bit = sizemask & (1 << 2);
int is_signed = sizemask & (2 << 2);
if (!is_64bit) {
TCGv_i64 temp = tcg_temp_new_i64();
TCGv_i64 orig = temp_tcgv_i64(arg);
if (is_signed) {
tcg_gen_ext32s_i64(temp, orig);
} else {
tcg_gen_ext32u_i64(temp, orig);
}
arg = tcgv_i64_temp(temp);
}
#endif /* TCG_TARGET_EXTEND_ARGS */

op = tcg_emit_op(INDEX_op_call);

pi = 0;

TCGOP_CALLO(op) = 0;

real_args = 0;
int is_64bit = sizemask & (1 << 2);
if (TCG_TARGET_REG_BITS < 64 && is_64bit) {
#ifdef TCG_TARGET_CALL_ALIGN_ARGS
/* some targets want aligned 64 bit args */
if (real_args & 1) {
op->args[pi++] = TCG_CALL_DUMMY_ARG;
real_args++;
}
#endif
/* If stack grows up, then we will be placing successive
arguments at lower addresses, which means we need to
reverse the order compared to how we would normally
treat either big or little-endian. For those arguments
that will wind up in registers, this still works for
HPPA (the only current STACK_GROWSUP target) since the
argument registers are *also* allocated in decreasing
order. If another such target is added, this logic may
have to get more complicated to differentiate between
stack arguments and register arguments. */
#if defined(HOST_WORDS_BIGENDIAN) != defined(TCG_TARGET_STACK_GROWSUP)
op->args[pi++] = temp_arg(arg + 1);
op->args[pi++] = temp_arg(arg);
#else
op->args[pi++] = temp_arg(arg);
op->args[pi++] = temp_arg(arg + 1);
#endif
real_args += 2;
}

op->args[pi++] = temp_arg(arg);
real_args++;

op->args[pi++] = (uintptr_t)&afl_maybe_log;
op->args[pi++] = flags;
TCGOP_CALLI(op) = real_args;

/* Make sure the fields didn't overflow. */
tcg_debug_assert(TCGOP_CALLI(op) == real_args);
tcg_debug_assert(pi <= ARRAY_SIZE(op->args));

#if defined(__sparc__) && !defined(__arch64__) \
&& !defined(CONFIG_TCG_INTERPRETER)
/* Free all of the parts we allocated above. */
real_args = 0;
int is_64bit = orig_sizemask & (1 << 2);
if (is_64bit) {
tcg_temp_free_internal(args[real_args++]);
tcg_temp_free_internal(args[real_args++]);
} else {
real_args++;
}
if (orig_sizemask & 1) {
/* The 32-bit ABI returned two 32-bit pieces. Re-assemble them.
Note that describing these as TCGv_i64 eliminates an unnecessary
zero-extension that tcg_gen_concat_i32_i64 would create. */
tcg_gen_concat32_i64(temp_tcgv_i64(ret), retl, reth);
tcg_temp_free_i64(retl);
tcg_temp_free_i64(reth);
}
#elif defined(TCG_TARGET_EXTEND_ARGS) && TCG_TARGET_REG_BITS == 64
int is_64bit = sizemask & (1 << 2);
if (!is_64bit) {
tcg_temp_free_internal(arg);
}
#endif /* TCG_TARGET_EXTEND_ARGS */
}

