vcday / eecs398-search · Commits

Commit acee5aa5 authored 7 years ago by aanvi
Added functionalites
parent a33c0ba5
No related branches or tags found.
1 merge request: !2 WIP:Crawler parser 2 merge into duplicate url-crawler
Showing 3 changed files with 43 additions and 23 deletions:

parser/Parser.cpp   +24 −13
parser/Parser.h     +3 −2
util/Tokenizer.h    +16 −8
parser/Parser.cpp  +24 −13
...
...
@@ -7,12 +7,15 @@
  * @param inFile
  * @return
  */
 //TODO instead of grabbing each line, look to see if beginning of
 // TODO title/url/anchortext, etc. Then continue until close tag and add to tokenizer after end of tag found
 // TODO have to read input in as a stream of chars eventually - cat into string?
 // TODO different counts: frequency, total num unique words, etc
 // TODO handle bad html style (ie no closing p tag)
 //TODO flag different types of words - determine if we want to do this in key of dict or value (in wordData struct)
 /*
  * Anchor text = #
  * Title = *
  * Url = @
  * Body = %
  */
 void Parser::parse ( string html, Tokenizer *tokenizer )
     {
...
...
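The comment block lays out a marker scheme for token provenance: `#` for anchor text, `*` for title, `@` for url, `%` for body. One way to apply it is to prefix each token with its region's marker before handing it to the tokenizer; a minimal sketch under that assumption (`TagType` and `markToken` are hypothetical names, not the project's API):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: prefix a token with the one-character marker
// for the HTML region it came from, following the scheme in the
// comment block (# anchor text, * title, @ url, % body).
enum class TagType { Anchor, Title, Url, Body };

std::string markToken ( const std::string &token, TagType type )
    {
    switch ( type )
        {
        case TagType::Anchor: return "#" + token;
        case TagType::Title:  return "*" + token;
        case TagType::Url:    return "@" + token;
        default:              return "%" + token;
        }
    }
```

Keeping the marker in the dictionary key (rather than in the `wordData` value, the alternative the TODO mentions) lets the same word from different regions index separately.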
@@ -63,17 +66,13 @@ void Parser::parse ( string html, Tokenizer *tokenizer )
         ++htmlIt;
         }
     }

-/**
- * Returns a url, or "" if none
- * @param word
- * @return
- */
-bool Parser::isScript ( string word )
+/*
+ * Returns true if script tag, false if not
+ */
+bool Parser::isScript ( string &word )
     {
     if ( *findStr ( "<script", word ) != '\0' )
         {
...
...
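The `isScript` check reduces to a substring search for `"<script"`. A standalone sketch of the same test, with `std::string::find` standing in for the project's `findStr` helper:

```cpp
#include <cassert>
#include <string>

// Sketch of the isScript predicate: true when the word contains an
// opening <script tag. std::string::find replaces the project's
// findStr, which appears to return a pointer into the string.
bool isScriptTag ( const std::string &word )
    {
    return word.find ( "<script" ) != std::string::npos;
    }
```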
@@ -81,8 +80,11 @@ bool Parser::isScript ( string word )
     }
     return false;
     }

-string Parser::extract_body ( string word )
+/*
+ * Returns body text if p tags, empty string if not
+ * If there's no closing tag, stops at the first opening tag or when it hits end of file
+ */
+string Parser::extract_body ( string &word, int &offset )
     {
     string body = "";
     auto foundBody = findStr ( "<p", word ) != '\0';
...
...
@@ -91,11 +93,20 @@ string Parser::extract_body( string word )
     while ( *findStr != '<' )
         {
         body += *findStr;
         if ( *findStr == ' ' )
             {
             count += 1;
             }
         }
     }
     return body;
     }

 /**
  * Returns a url, or "" if none
  * @param word
  * @return
  */
 string Parser::extract_url ( string &word )
     {
...
...
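The new comment pins down `extract_body`'s contract: return the text following a `<p>` tag, and if no closing tag appears, stop at the next opening tag or at end of input. A standalone sketch of that contract (`extractBody` is a hypothetical name, and `std::string::find` stands in for `findStr`):

```cpp
#include <cassert>
#include <string>

// Sketch of the extract_body contract from the new doc comment:
// grab the text after a <p> tag, stopping at the next '<' (either
// the closing </p> or, with malformed HTML, the next opening tag),
// or at end of input if no '<' follows.
std::string extractBody ( const std::string &html )
    {
    std::size_t open = html.find ( "<p>" );
    if ( open == std::string::npos )
        return "";
    std::size_t start = open + 3;
    std::size_t stop = html.find ( '<', start );
    if ( stop == std::string::npos )
        stop = html.size ( );
    return html.substr ( start, stop - start );
    }
```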
parser/Parser.h  +3 −2
...
...
@@ -32,6 +32,7 @@ public:
  * Parser
  * @return
  */
+// TODO need to change vector type to word data, change where struct is declared
 const unordered_map< string, vector< int >> *execute ( Document *document )
     {
     Tokenizer tokenizer;
...
...
@@ -48,8 +49,6 @@ private:
  * @param inFile
  * @return
  */
-//TODO instead of grabbing each line, look to see if beginning of
-// TODO title/url/anchortext, etc. Then continue until close tag and add to tokenizer after end of tag found
 void parse ( string html, Tokenizer *tokenizer );
...
...
@@ -68,6 +67,8 @@ private:
  */
 string extract_title ( string &word );
+bool isScript ( string &word );
+string extract_body ( string &word );
 };
util/Tokenizer.h  +16 −8
...
...
@@ -27,21 +27,29 @@ public:
         return docIndex;
         }

     //add type of word parameter, ie paragraph, url etc
-    void execute ( string originalText, int offset )
+    void execute ( string &originalText, int offset )
         {
         vector< string > splitText = splitStr ( originalText, ' ' );
-        string lowerString = "";
+        string processedString = "";
+        int vectorLength = 0;
-        for ( int i = 0; i < splitText.size ( ); ++i )
-            {
-            lowerString = toLower ( splitText[ i ] );
-            if ( !isStopWord ( lowerString ) )
+        for ( int i = 0; i < splitText.size ( ); ++i )
             {
             // case fold
             processedString = toLower ( splitText[ i ] );
             //strip all characters
             processedString = stripStr ( processedString );
             if ( !isStopWord ( lowerString ) )
                 {
-                wordData currentWord;
+                // stem word
+                processedString = stem.execute ( processedString );
                 wordData currentWord;
                 currentWord.offset = offset;
+                vectorLength = ( *docIndex )[ lowerString ].size ( );
                 ( *docIndex )[ lowerString ].push_back ( currentWord );
                 //incrementing frequency value of the current word
+                ( *docIndex )[ lowerString ][ vectorLength - 1 ].frequency += 1;
                 ++offset;
                 }
...
...
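The reworked `execute( )` amounts to a per-token pipeline: case-fold, strip punctuation, drop stop words, stem, then record the word's offset and bump its frequency in `docIndex`. A self-contained sketch of that pipeline (splitting and stripping are simplified, the stemmer is omitted, and `wordData` mirrors only the fields the diff touches; this is an illustration, not the project's implementation):

```cpp
#include <cassert>
#include <cctype>
#include <string>
#include <unordered_map>
#include <unordered_set>
#include <vector>

// Mirrors the fields of the project's wordData that the diff uses.
struct wordData
    {
    int offset = 0;
    int frequency = 0;
    };

// Sketch of the Tokenizer::execute pipeline: split on spaces,
// case-fold, strip non-alphanumeric characters, skip stop words,
// then record each surviving word's offset and increment the
// frequency of the entry just pushed, as the diff does.
void tokenize ( const std::string &text, int offset,
                std::unordered_map< std::string, std::vector< wordData > > &docIndex )
    {
    static const std::unordered_set< std::string > stopWords = { "the", "a", "and" };
    std::string word;
    auto flush = [ & ] ( )
        {
        if ( !word.empty ( ) && !stopWords.count ( word ) )
            {
            wordData currentWord;
            currentWord.offset = offset;
            docIndex[ word ].push_back ( currentWord );
            docIndex[ word ].back ( ).frequency += 1;
            }
        word.clear ( );
        ++offset;
        };
    for ( char c : text )
        {
        if ( c == ' ' )
            flush ( );                                  // token boundary
        else if ( std::isalnum ( static_cast< unsigned char > ( c ) ) )
            word += std::tolower ( static_cast< unsigned char > ( c ) );  // case fold + strip
        }
    flush ( );                                          // last token
    }
```

One caveat the diff leaves open: the new loop builds `processedString` but still tests `isStopWord ( lowerString )`, so the stop-word check may operate on a stale value; the sketch above checks the processed token instead.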