Unknown dependency versions

When building Flink's SocketWindowWordCount example:

import org.apache.flink.api.java.utils.ParameterTool
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

/**
 * Implements a streaming windowed version of the "WordCount" program.
 *
 * This program connects to a server socket and reads strings from the socket.
 * The easiest way to try this out is to open a text server (at port 12345)
 * using the ''netcat'' tool via
 * {{{
 * nc -l 12345
 * }}}
 * and run this example with the hostname and the port as arguments.
 */
object SocketWindowWordCount {

  /** Main program method */
  def main(args: Array[String]): Unit = {

    // the host and the port to connect to
    var hostname: String = "localhost"
    var port: Int = 0

    try {
      val params = ParameterTool.fromArgs(args)
      hostname = if (params.has("hostname")) params.get("hostname") else "localhost"
      port = params.getInt("port")
    } catch {
      case e: Exception => {
        System.err.println("No port specified. Please run 'SocketWindowWordCount " +
          "--hostname <hostname> --port <port>', where hostname (localhost by default) and port " +
          "is the address of the text server")
        System.err.println("To start a simple text server, run 'netcat -l <port>' " +
          "and type the input text into the command line")
        return
      }
    }

    // get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment

    // get input data by connecting to the socket
    val text: DataStream[String] = env.socketTextStream(hostname, port, '\n')

    // parse the data, group it, window it, and aggregate the counts
    val windowCounts = text
      .flatMap { w => w.split("\\s") }
      .map { w => WordWithCount(w, 1) }
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")

    // print the results with a single thread, rather than in parallel
    windowCounts.print().setParallelism(1)

    env.execute("Socket Window WordCount")
  }

  /** Data type for words with count */
  case class WordWithCount(word: String, count: Long)
}

As with the Spark Streaming examples, you need to add dependencies to the pom file. The official example does not explain how to write the pom, so even after copy-pasting the source code it still won't run. The official site does, however, provide the flink-examples project, which contains a parent pom and a child pom. Based on those two pom files, I added the following dependencies to my pom.xml:

<dependencies>
  <!-- Flink dependencies -->

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-core</artifactId>
  </dependency>

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
  </dependency>

  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <scope>compile</scope>
  </dependency>

  <dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <scope>compile</scope>
  </dependency>

  <!-- core dependencies -->
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.11</artifactId>
  </dependency>

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
  </dependency>
</dependencies>

The Maven panel on the right then lists these dependencies with unknown versions:

(unknow.png: IDE Maven panel showing the dependencies as unknown version)
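
The reason is a Maven rule: a dependency may omit <version> only when a parent pom (or an imported BOM) supplies it through a dependencyManagement section. Inside the flink-examples project the versions come from its parent chain, but a standalone pom has no such parent, so Maven cannot resolve them. A minimal sketch of how a parent could pin the versions (the property name and version are assumptions, matching the 1.5.0 found below):

<!-- sketch only: a parent pom that lets children omit <version> -->
<properties>
  <flink.version>1.5.0</flink.version> <!-- assumed version -->
</properties>

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-core</artifactId>
      <version>${flink.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-streaming-scala_2.11</artifactId>
      <version>${flink.version}</version>
    </dependency>
    <!-- ... likewise for the other Flink artifacts -->
  </dependencies>
</dependencyManagement>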

So, searching the Maven repository for flink-core:

<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-core -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-core</artifactId>
  <version>1.5.0</version>
</dependency>

you can see that a version number is required:

(core-version.png: the flink-core dependency resolving once a version is specified)
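
Putting this together, a standalone pom.xml needs an explicit version on every dependency. A sketch, assuming Flink 1.5.0 (as found above); the slf4j/log4j versions are also assumptions you should adjust to your project:

<!-- sketch: the same dependency list with versions pinned explicitly -->
<properties>
  <flink.version>1.5.0</flink.version> <!-- assumed; match your cluster -->
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-core</artifactId>
    <version>${flink.version}</version>
  </dependency>

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-scala_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>

  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.10_2.11</artifactId>
    <version>${flink.version}</version>
  </dependency>

  <!-- logging; versions assumed, pick the ones your project standardizes on -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.25</version>
    <scope>compile</scope>
  </dependency>

  <dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
    <scope>compile</scope>
  </dependency>
</dependencies>

Alternatively, keep the version-less dependencies and add a dependencyManagement block like the sketch earlier; either way Maven can resolve the versions and the panel stops reporting them as unknown.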